[FOM] on Kieu from Andrew Hodges
Martin Davis
martin at eipye.com
Tue May 11 18:27:34 EDT 2004
More comments from Andrew Hodges on the 'Quantum adiabatic algorithm
for Hilbert's tenth problem', quant-ph/0310052 v2 (Tien D. Kieu):
I am very grateful to Dr Kieu for his comments. On the question of the
'adiabatic theorem,' I quite see that the suggestion I made in my earlier
contribution was no good at all in examining the validity of its
application in this case. Some other comments of mine were also off the
point. However, I am going to address in more detail another technical
aspect of Dr Kieu's proposal, and then expand on the general point about
infinite precision which lies at the heart of this question.
On the technical level, I am afraid that it needs an expert to check Dr
Kieu's claim about the persistence of a non-zero energy gap during the
adiabatic process. I still doubt, for the reasons I gave before, whether it
is really correct. Later I will also say why it seems to me doubtful
whether his 'finite time,' even if technically valid, actually has any
significance. But first I will consider another, simpler, problem with Dr
Kieu's method where I am quite sure his technical machinery is inadequate.
This is the problem of the possible 'degeneracy' of the final ground state:
that there may be many, indeed infinitely many, values at which the polynomial
being tested takes its least value. An example is the polynomial (2*m^2 -
n^2)^2, which is never zero but takes the value 1 for (m,n)= (0,1), (1,1),
(2,3), (5,7), (12,17), (29,41), (70, 99)... Dr Kieu's system is required to
settle on a 'ground state' which gives this least value of 1, but if there
are many solutions giving this least value then his theorem will not apply.
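As a quick check of this example, a few lines of Python (with arbitrary search bounds, chosen only so as to reproduce the pairs listed above) confirm that the value 1 is attained again and again:

# Brute-force check of the example: (2*m^2 - n^2)^2 equals 1 at the Pell-like
# pairs quoted above (the search bounds of 100 and 200 are arbitrary).
pairs = [(m, n) for m in range(101) for n in range(201)
         if (2 * m * m - n * n) ** 2 == 1]
print(pairs)   # [(0, 1), (1, 1), (2, 3), (5, 7), (12, 17), (29, 41), (70, 99)]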
His answer (page 4) is to make a small perturbation in the final
Hamiltonian so as to separate the degenerate states, of the form:
H = P(a*a) + gamma (a*) + gamma* (a)
Here a* is the creation operator, a the conjugate annihilation operator,
so that a*a is the number operator. P is the polynomial being tested, and
gamma is a small complex parameter.
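For concreteness, here is a minimal sketch (a single-variable toy with an illustrative polynomial and gamma, not Dr Kieu's actual two-oscillator construction) of what this perturbed Hamiltonian looks like as a matrix in the truncated number basis, using a*|n> = sqrt(n+1)|n+1>:

import numpy as np

# Toy sketch: H = P(a*a) + gamma a* + gamma* a in the number basis |0>, ..., |D-1>,
# with matrix elements <n+1| a* |n> = sqrt(n+1).
def perturbed_hamiltonian(P, gamma, D):
    H = np.diag([float(P(n)) for n in range(D)]).astype(complex)
    for n in range(D - 1):
        H[n + 1, n] += gamma * np.sqrt(n + 1)                  # gamma a*
        H[n, n + 1] += np.conjugate(gamma) * np.sqrt(n + 1)    # gamma* a
    return H

P = lambda n: (n - 5) ** 2 + 1          # illustrative polynomial, least value 1 at n = 5
H = perturbed_hamiltonian(P, gamma=1e-3, D=50)
print(np.linalg.eigvalsh(H)[0])         # lowest eigenvalue: close to 1 for this tame P

For a polynomial of this tame kind the perturbation really is small; the difficulty set out below arises for polynomials which, like the example above, take small values at arbitrarily large occupation numbers.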
Given a polynomial, we have in general no idea whatever whether this
degeneracy occurs. Therefore, making allowance for this possibility is not a
side issue or a special case; it must be a central part of the
general procedure. Unfortunately, although Dr Kieu says that this gamma can
be introduced and then tend to zero, there is no indication in his sequence
of steps for when or how this limiting procedure is to be done. So I cannot
assess how it is actually supposed to form part of the general procedure.
But whatever the procedure is supposed to be, there is a serious problem
with it. The problem is that no value of gamma can give a 'small'
perturbation. Typically, the ground state is supposed to arise when a huge
value of the occupation number N gives rise to a very small value of P(N).
[In what follows I will write a single integer N for what actually is a
finite set of integers, e.g. the two values (m,n) in the example given
above. The distinction between functions of a single integer and functions
of many integers is of no importance in this context.] The example above
shows how N can be arbitrarily large. Then the perturbed H acting on the
state |N> gives a result like
P(N) |N> + gamma sqrt(N+1) |N+1> + gamma* sqrt(N) |N-1>
so that the 'small perturbation' terms overwhelm the original term. No
matter how small gamma is made, there will be (infinitely many) such
eigenstates. Thus in general the perturbed eigenstates and eigenvalues will
not be close to the unperturbed ones. This completely wrecks the theory of
finding the ground state.
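The scale of the difficulty is easy to exhibit numerically. Take gamma as small as you please and walk up the Pell-like pairs of the example above (writing, as before, a single occupation number for the pair): the 'small' term gamma*sqrt(N) soon dwarfs the unperturbed eigenvalue P(N) = 1. A minimal sketch, with gamma = 1e-12 as an illustrative choice:

import math

# However small gamma is chosen, gamma*sqrt(N) eventually swamps P(N) = 1 at the
# Pell-like pairs of the example (2*m^2 - n^2)^2.
gamma = 1e-12
m, n = 1, 1
for _ in range(80):
    m, n = m + n, 2 * m + n          # next pair with (2*m**2 - n**2)**2 == 1
print(gamma * math.sqrt(n))          # about 2.2e+03: thousands of times P(N) = 1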
What is most important here is the general problem of which this is
symptomatic. The problem is that the model requires an infinite precision
to work, and that the slightest change to it, as with the 'small gamma',
will completely wreck it. So I will now focus on this infinite precision.
First, I will return to the point made by Martin Davis about the need to
encode the polynomial itself with infinite precision. Dr Kieu argues
against this criticism that there is no precision problem with an integer
coefficient. The trouble is that his Hamiltonian is not actually a purely
number-theoretic quantity. The apparently integer-valued operator P(a*a)
has to be multiplied by an energy to make it into a Hamiltonian - this is
essential to a connection with time, which is the whole point of the
theory.
So really the term is P(a*a) E, where E is some unit energy. There is no
natural integer value for energy, so E must be regarded as taking its value
from the continuum of reals. The inescapability of using the continuum is
emphasised by the fact that during the adiabatic process, this Hamiltonian
must be multiplied by the number t/T, continuously varying from 0 to 1.
Accordingly, the integer values of the polynomial have to be physically
realised as real numbers: 2 has to be translated by analog means into
2.000000000... If there is the slightest inaccuracy in the value of E then
this realisation is wrecked. For instance, in the standard example (2*m^2
- n^2)^2, there is an E associated with the oscillator for the m variable
and an E associated with the oscillator for the n variable. If these should
differ by 1 part in 10^(10^(10^10)) (let us say) then, as Martin Davis
pointed out, the resulting polynomial will yield completely false minima.
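A hedged bit of arithmetic makes the point concrete. The relative mismatch eps = 1/n^2 used below is chosen purely for illustration, and is vastly coarser than 1 part in 10^(10^(10^10)); yet if the n-oscillator's energy is too large by that amount, the physically realised quantity vanishes at a Pell-like pair where the exact polynomial takes its least value 1:

from fractions import Fraction

# With the n-oscillator's energy larger by the relative amount eps, the realised
# quantity is proportional to (2*m**2 - (1 + eps)*n**2)**2.
m, n = 33461, 47321                 # a Pell-like pair: 2*m*m - n*n == 1
eps = Fraction(1, n * n)            # about 4.5e-10 -- illustrative, not extreme
realised = (2 * m * m - (1 + eps) * n * n) ** 2
print(2 * m * m - n * n, realised)  # prints '1 0': a completely false 'zero'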
Moreover the dimensionless number operator a*a, which looks harmless
enough, is in fact (up to a factor of one half and an additive constant) the
operator (-L^2) d^2/dx^2 + L^(-2) x^2, where L is
the characteristic length of the harmonic oscillator. To get integer values
of N, L must be known to infinite precision. The Nth state of the harmonic
oscillator consists of N peaks and troughs packed into the characteristic
length L (this is just the nature of the Hermite functions). Thus the
application of the number operator amounts to the correct identification of
wavelengths of unboundedly small length L/N. If there is any imprecision in
L then, for all N greater than some value, the results will be completely
wrong.
In my earlier comment, I raised the question of how this physical system
can possibly be doing the unboundedly lengthy work of evaluating the
polynomial for all N. This picture of the Hermite functions gives us an
answer: the theory rests on the assumption that we can apply unboundedly
precise operators in encoding and reading a number N as a system of N peaks
and troughs packed into a finite box. Also, the system needs unboundedly
high frequencies, since the Nth occupancy state oscillates with period
inversely proportional to P(N). So longer and longer classical computing
times are replaced by finer and finer precision both in space and in time.
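The precision requirement can be made concrete with a small numerical sketch (the occupation number N = 60 and the grid sizes below are arbitrary illustrative choices): reading off N by counting the peaks and troughs of the Nth oscillator eigenfunction only succeeds if the sampling resolves the shortest oscillations of that function.

import numpy as np

# The Nth oscillator eigenfunction has exactly N nodes; a grid too coarse to
# resolve its shortest oscillations miscounts them, i.e. misreads N.
def psi(n, x):
    """Normalised oscillator eigenfunction (x in units of L), built with the
    numerically stable three-term recurrence for Hermite functions."""
    p_prev = np.zeros_like(x)
    p = np.pi ** -0.25 * np.exp(-x ** 2 / 2)                                # psi_0
    for k in range(1, n + 1):
        p, p_prev = np.sqrt(2.0 / k) * x * p - np.sqrt((k - 1) / k) * p_prev, p
    return p

N = 60                                   # occupation number to be read off
for samples in (2000, 200, 60, 30):      # grid resolutions across -12 L .. +12 L
    x = np.linspace(-12.0, 12.0, samples)
    y = psi(N, x)
    nodes = int(np.sum(np.sign(y[:-1]) != np.sign(y[1:])))
    print(samples, nodes)                # the coarsest grids miscount the 60 nodes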
This is not really very different from classical schemes which demand
faster and faster operation as computing proceeds, so as to do an infinite
amount of computing in a finite time. In fact, I question whether Dr Kieu's
finite time T, even if formally correct, really has any significance. This
time is not finite as measured in any natural unit, for the natural units
of time for the computation are unboundedly small. Thus it is quite unfair
to compare it with the time taken by a Turing machine operating with fixed
unit step times.
Dr Kieu would say, I think, that there *is* a major difference from
classical schemes in that his scheme exploits the *superposition* possible
in quantum mechanics, in analogy to the Shor factorization algorithm. This
analogy seems to me misleading. In Shor's work there is a finite number n
of q-bits which, as is well understood, can do the work of 2^n classical
bits. They operate in parallel for a finite time using a system which
beautifully exploits this exponential gain. There is no infinite precision
required, and all the q-bits go through essentially the same process. It is
amenable to error analysis and error correction. Dr Kieu's system, in
contrast, requires loading an infinite amount of information into a finite
quantum system. There is no room for any error whatever. Infinitely many
different superimposed modes work in parallel, and they are not at all on
the same footing: the work of the Nth mode demands greater and greater
precision as N increases. The overall difficulty of the process is at least
as great as the supremum of the difficulty of all these superimposed modes
- and this is infinite. Superposition does not get rid of this infinity.
Dr Kieu has also suggested that there are results in quantum computation
demonstrating other miraculously fast operations, in particular referring
to one in which the time taken for a search is independent of the number of
searched terms. However, reference to quant-ph/0309021 shows that this
constant time is offset by the greater energy required, and that this
energy becomes infinite as the number of terms tends to infinity.
We should also consider the question of simulating the whole system on a
classical computer. Dr Kieu's claim is that the probabilistic nature of
quantum mechanics means that there is no conflict with classical
unsolvability theorems. This seems to rest on a basic confusion. If one
could solve the Schroedinger equation for his system on a computer in a
finite time, then there would be no probabilistic element in his system at
all. Probability comes into quantum mechanics through the measurement
operation, which cannot capture the full content of the wave-function. But
the evolution of the wave-function through the Schroedinger equation is
entirely deterministic, and if you can calculate it, then you don't need to
make any such measurements - you have all the (complex-valued) amplitudes
of the states available for inspection at any stage. On page 5, Dr Kieu
writes that 'the probabilistic nature of the algorithm is manifest
differently [in classical computation] through the extrapolation to zero
step size in solving the Schroedinger equation.' The only sense I can make
of this is that he considers that the errors introduced in using a non-zero
step size are related to the probabilities in the quantum mechanical
measurement. But these two things are completely different and independent.
It is fairly obvious that choosing a finite step size is equivalent to
considering the system only up to some finite value of N. (If the mesh size
of the simulating calculation is L/N, then it cannot possibly encode those
wavelengths shorter than L/N that correspond to higher values of N.) Thus
the error introduced by classical simulation is nothing whatever to do with
the probabilities that enter into quantum measurement - it is the error
that flows from truncating the system at a finite value of N. The result of
such a truncation is equivalent to evaluating the polynomial for all values
up to N, and finding the least value assumed by it in that range: something
obviously and trivially computable.
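Indeed, what the truncated simulation delivers can be written down in a few lines (the cut-off of 100 is of course arbitrary, and n is taken to be at least 1, as in the pairs listed earlier):

# What truncation at a finite occupation number actually computes: the least value
# of the polynomial over a finite box, here for the running example (2*m^2 - n^2)^2.
def least_value_up_to(N):
    return min((2 * m * m - n * n) ** 2
               for m in range(N + 1) for n in range(1, N + 1))

print(least_value_up_to(100))   # 1: no zero below the cut-off, which settles nothing
                                # about the behaviour at larger m and n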
This is also a place to mention that truncating Dr Kieu's method at the Nth
occupation number is in no way analogous to the truncation of a classical
computation by allowing only M squares of Turing machine tape. If there
exists an algorithm for some classical computation, then the restriction to
M squares can be arranged to send a signal, when that resource has been
exhausted, that the computation cannot be completed. Provision of further
squares of tape will, eventually, result in an exact and correct
conclusion in finite time. In contrast, restriction to N occupation
numbers simply amounts to abandoning the Hilbert problem for a different
problem, viz. finding the least value of the polynomial in that finite
range. If there is in fact no zero of the polynomial then fresh trials with
greater values of N will not make any difference: the actual Hilbert
problem under consideration will never be settled.
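The contrast can be put in programming terms with a toy sketch (the example polynomial below is an arbitrary one that does possess a zero): a resource-bounded search either finds the answer or signals that its bound was exhausted, and a larger bound eventually succeeds; no enlargement of N does the corresponding job for a polynomial with no zero.

# Toy contrast: a bounded search for a zero either succeeds or reports that the
# bound ran out -- the analogue of running out of tape -- and a larger bound
# eventually gives the exact, correct answer.
def bounded_search(P, bound):
    for m in range(bound + 1):
        for n in range(bound + 1):
            if P(m, n) == 0:
                return ('zero found', (m, n))
    return ('bound exhausted', bound)

P = lambda m, n: (m - 3) ** 2 + (n - 70) ** 2  # illustrative polynomial with a zero
print(bounded_search(P, 10))                   # ('bound exhausted', 10)
print(bounded_search(P, 100))                  # ('zero found', (3, 70))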
In some final remarks I shall try to put this 'infinite precision' question
in a wider context. First, it has nothing to do with the 'uncertainty
principle', nor indeed is it specifically quantum mechanical. The problem
behind any proposal such as Dr Kieu's is that it rests on something that is
not a physical model of the mathematical idea being studied, but instead on
a sort of physical anti-model. By this I mean that the crucial features of
Dr Kieu's theory depend on those aspects of a mathematical model which
normally one regards as totally irrelevant to the physical meaning, purpose
and validity of the model. We don't say a theory of the hydrogen atom is
invalidated because there isn't room in the universe for the
10^(10^(10^(10^10)))th energy level, or because the gravitational wave
caused by a neutrino in a faraway galaxy spoils the precision of the
electron spin. But Dr Kieu's model absolutely depends on having every such
energy level and just such precision - and indeed infinitely more!
Evaluating the polynomial up to some value N, or simulating the
Schroedinger equation for some finite mesh size, is not an approximation to
solving the Hilbert problem. It is *no approximation at all.* The whole
point of the absolute unsolvability is that given any N, however huge, or
indeed any computable function of the polynomial coefficients, however
ridiculously gigantic, there will be (infinitely many) polynomials whose
first zero lies beyond that point. The *entire content* of the problem
being studied here lies in finding those ridiculously huge numbers. 'Small'
numbers - i.e. those less than a computable bound - are of no relevance or
use at all. But these ridiculously huge numbers correspond to looking at
the most ridiculous extrapolations of the physical model, far beyond
anything where its application can be controlled.
For this reason Dr Kieu's numerical simulations, and his use of the 'gamma'
perturbation such as is used in quantum chemistry, are irrelevant. They are
appropriate indeed for quantum models in genuine physics, where the core
material being studied is within a finite range, and absolute exactitude is
not required. They are useless for this study of the infinite, in which the
usual modelling assumptions are turned on their head. The only physical
model that could incorporate the infinite precision needed in this theory
is one that involves no approximations at all: some viable piece of a
'theory of everything.'
To call Dr Kieu's model a 'solution' or 'algorithm' seems to me highly
misleading. In fact, a case could be made that it makes Hilbert's Tenth
problem *harder*. We have a problem which we know needs computation with an
infinite set of integers. Dr Kieu transforms it into a problem which
involves infinitely precise analog computation with an infinite set of
complex numbers (one for each occupancy state of the oscillator) - a much
larger cardinality, the same as the space of continuous functions on a
compact set.
Nevertheless, it is possible that the model, when analysed correctly, may
bring to light some genuine features of the very interesting adiabatic
property. It might be that, properly considered, it opens interesting
mathematical connections between uncomputability and differential
operators. It might also stimulate genuine questions about what aspects of
quantum computing are really 'quantum' and which stem from its analog
computing elements.
-Andrew Hodges