[FOM] Tien Kieu's reply to Elena, Lorenz, Davis and Hodges.

Toby Ord toby.ord at philosophy.oxford.ac.uk
Wed Apr 7 14:00:46 EDT 2004


Tien Kieu was quite interested to read the recent arguments appearing 
here in relation to his proposed method for hypercomputation via the 
Quantum Adiabatic Theorem. Since he is not on the FOM list, he has sent 
me a copy of his replies to these arguments to post in his stead.

Toby.




1)-------------------------------------------------

I am grateful for the attention and time that Andrew Hodges has paid to 
the algorithm.  Below are my remarks and responses, interspersed with 
the original comments by Hodges.

> "Kieu avoids the problem of demanding the running of a Turing machine 
> for
> infinitely many steps by using the equivalence with the problem of 
> finding
> to solutions to a Diophantine problem. Thus, he is evaluating a 
> polynomial
> for infinitely many arguments instead.  I can't believe that this 
> reformulation can work magic, and allow an infinite amount of 
> computation to be done in a finite time.

- "Evaluating ... for infinitely many arguments" is and/or conjures up 
a classical visualisation for us to comprehend the search in our 
limited experience.  Unfortunately, this visualisation is not supported 
by the quantum adiabatic theorem, upon which our algorithm is based.  
Essentially, the theorem states that with the condition of discreteness 
of the eigenvalues, continuity and differentiability of the associated 
eigenfunctions and no level crossing then the initial state prepared to 
be an eigenstate of the initial Hamiltonian will approach the 
corresponding eigenstate of the final Hamiltonian in a *finite* time:  
The slower (but still finite) the temporal rate, the higher the 
`approach' probability.  This theorem has been mathematically proved 
for infinite dimension Hilbert spaces (see, for example, Messiah's book 
on Quantum Mechanics).  I have simply exploited this amazing ability 
for our algorithm to locate a single state (the ground state) among an 
infinite number of states in a finite time.  Thus, the hard-to-believe 
thing would be not our algorithm entirely but also the (proven) theorem 
itself.
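
To make the role of the evolution rate concrete, here is a toy 
numerical sketch (Python with numpy; a truncated, finite-dimensional 
illustration of the generic adiabatic mechanism, not the construction 
of the paper) in which slower interpolation from an initial 
Hamiltonian to a final "Diophantine" Hamiltonian yields a higher 
probability of landing in the final ground state:

    # Toy sketch (assumed setup): interpolate H(s) = (1-s)*H_I + s*H_F
    # on a truncated basis and check that slower evolution gives a
    # higher probability of ending in the ground state of H_F.
    import numpy as np

    D = 16                                  # truncated dimension (illustration only)
    P = lambda n: n - 3                     # toy polynomial with a zero at n = 3
    H_F = np.diag([float(P(n) ** 2) for n in range(D)])   # final Hamiltonian (P(n))^2

    psi0 = np.ones(D) / np.sqrt(D)          # uniform superposition as the start
    H_I = np.eye(D) - np.outer(psi0, psi0)  # initial Hamiltonian; psi0 is its ground state

    def adiabatic_run(T, steps=2000):
        """Stepwise-exact integration of i d|psi>/dt = H(t/T) |psi>."""
        psi, dt = psi0.astype(complex), T / steps
        for k in range(steps):
            s = (k + 0.5) / steps
            w, V = np.linalg.eigh((1 - s) * H_I + s * H_F)
            psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
        return abs(psi[3]) ** 2             # probability of the true ground state n = 3

    for T in (1.0, 10.0, 100.0):
        print(f"T = {T:6.1f}   P(ground) = {adiabatic_run(T):.4f}")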

> His Hamiltonian operating on the nth occupation number state must be 
> equivalent to putting n particles through a physical process, and one 
> would expect this to take something like n times as long as putting 
> one particle through the process.

- This expectation once again originates from a classical way of 
visualising the process, which is not supported by quantum mechanics.  
In fact, this expectation is in direct opposition to the principle of 
linear superposition, which has been exploited for the `parallel' 
processing power of quantum computation, as demonstrated in the (less 
controversial) algorithm by Shor for factoring integers.

> (Kieu's method seems to make no essential use of the discreteness of 
> the polynomial coefficients, only of the integer variables on which it 
> is evaluated.)   The trouble with this is, as you have pointed out, 
> that the coefficients require *infinite precision*. The Hamiltonian 
> H=(2m^2 - n^2)^2 has infinitely many states with 'energy' 1, and none 
> with 'energy' 0. But H will have infinitely many zeroes if that 
> coefficient 2 is allowed to vary by any amount whatever.  Infinite 
> precision is not 'effective'.  Maybe we *could* insist that the 
> Hamiltonian be built entirely out of operators with discrete integer 
> eigenvalues (exploiting the fact that the Diophantine polynomial has 
> integer coefficients). This would get us out of the need for infinite 
> precision. But in this case, it seems to me that it would be 
> equivalent to performing, by quantum operations, bit-by-bit operations 
> of unary arithmetic on the input integer n. Basically, this would be 
> just as complicated as putting a T-machine through n steps, and we are 
> back to the problem of Calude's 'operator.'
> At the very least, some argument should be supplied by Kieu to
> explain why in fact, despite these obvious difficulties, his
> Hamiltonian operator could be implemented in an effective way in a
> finite time.

-  The discreteness of the polynomial coefficients is essential; see 
the discussion of \sqrt{2} below.  However, I am not convinced that in 
a simulation or in a physical implementation of the algorithm one 
would need infinite precision just to represent an *integer* such as 2.

> More generally, there is very little discussion of the 'infinite
> limit' in Kieu's paper, even though infinitude is *of the essence* in
> this discussion, and finite approximations are of no real value. A
> lot of his theory is for two-state cases, with a rather vague
> assertion that this suffices because eventually two states will
> dominate.  I suspect that the infinite limit doesn't actually work
> for his theory as he thinks. As his theory never seems to use the
> fact that the Hamiltonian is an integer-valued polynomial, as far as
> I can see it would work just as well if it were real-valued. (Or have
> I missed something in the theory of adiabatic cooling?) Thus we could
> apply it to ((sqrt(2)*m - n))^2 which, as m and n range over
> integers, can never vanish, but attains arbitrarily small values for
> sufficiently large m and n. How would his idea of converging to the
> 'ground state' apply in this case? There are infinitely many states
> of which the infimum is 0, but 0 is never attained. I suspect that
> this might show up a difficulty in the infinite limit.

- The case of \sqrt{2} is not applicable to our algorithm for the 
following two reasons, one mathematical and the other operational.  
Mathematically, the quantum adiabatic theorem, on which we base our 
algorithm, requires the assumption that all the eigenvalues are 
*discrete*.  With \sqrt{2}, on the other hand, the final Hamiltonian 
would have zero as a limit point of its eigenvalues: given an 
arbitrarily small \epsilon > 0, there exist countably many eigenvalues 
which are less than \epsilon (and greater than zero).  Thus, it is not 
at all clear whether the quantum adiabatic theorem remains valid with 
\sqrt{2}.  Operationally, furthermore, we have to be able to 
classically set up the computing system before starting any quantum 
(adiabatic) computation.  With \sqrt{2} we would need (i) infinite 
precision, in the sense that *all* the digits of \sqrt{2} are readily 
available (not necessarily all at once, but possibly one by one); and 
(ii) *infinite time* to classically "read in" *all* the infinitely 
many digits of \sqrt{2} before the quantum computation starts.  (The 
"read in" is a classical act, either through measurement or through 
the supply from a Turing machine in a simulation of the quantum 
algorithm, and thus would require an infinite amount of time to 
process all the digits - in contrast to quantum processes, in which 
many can be processed at once in parallel, as embodied in the 
superposition principle.)  These two reasons separate the irrational 
coefficients of general equations from the rational coefficients of 
Diophantine equations, and perhaps limit our algorithm to the latter - 
thus providing a way out of the "counterexample" of \sqrt{2}.
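
To see concretely how the spectrum accumulates at zero in the \sqrt{2} 
case, here is a small Python check (my own illustration): along the 
continued-fraction convergents n/m of \sqrt{2}, the value 
(\sqrt{2} m - n)^2 becomes arbitrarily small without ever vanishing.

    # (sqrt(2)*m - n)^2 along the convergents of sqrt(2): the values
    # approach 0 but never reach it, so 0 is a limit point of the spectrum.
    from math import sqrt

    m, n = 1, 1                   # first convergent n/m = 1/1
    for _ in range(8):
        print(f"m = {m:5d}  n = {n:5d}   (sqrt(2)m - n)^2 = {(sqrt(2) * m - n) ** 2:.3e}")
        m, n = m + n, 2 * m + n   # next convergent: n/m -> (2m + n)/(m + n)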

> There is certainly something very odd about his assertions on page 6
> about 'truncating' the number of states according to which
> Diophantine equation one is looking at. The trivial example of
> H=(2m^2 - n^2)^2 shows that the lowest energy states (H=1) can
> continue forever. How then can any 'truncation' be valid? More
> deeply, his remark goes against the whole spirit of a method which
> must be applicable to any and every Diophantine equation, as it
> suggests taking some first step which depends on inspecting the
> equation to be considered. But worse, we know from computability
> theory that there is no algorithm which, supplied with a general
> Diophantine equation, can give you a bound on its solutions. So any
> such inspection would be a waste of time. I confess I have not been
> able to see whether this discussion of 'truncation' is essential to
> his argument about taking the infinite limit. But if it is not, I
> cannot see why he brings it up, and if it is, then it must render his
> argument invalid.

-  You are right: truncation is not essential to the algorithm at all.  
The only reason I mentioned truncation was that I wanted to anticipate 
and prepare the ground for a discussion, in the next paper, of 
simulations of the algorithm on classical computers.  Such simulations 
would require some *controllable* truncation.  (With hindsight, 
perhaps I should not have mentioned truncation in the present paper.)
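
As for what a *controllable* truncation could look like in such a 
simulation, here is a minimal sketch (the helper `run`, the cutoffs 
and the tolerance are my assumptions for illustration, not a 
prescription from the paper): enlarge the basis cutoff until the 
quantity of interest stops changing appreciably.

    # Hypothetical convergence check: run(D) is assumed to return the
    # final ground-state probability computed with basis cutoff D
    # (e.g. the adiabatic_run sketch earlier, generalised to take D
    # as a parameter).
    def truncation_converged(run, cutoffs=(8, 16, 32, 64), tol=1e-3):
        prev = None
        for D in cutoffs:
            p = run(D)
            if prev is not None and abs(p - prev) < tol:
                return D, p       # enlarging the basis no longer matters
            prev = p
        raise RuntimeError("not converged within the tested cutoffs")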

> In the concluding section, Kieu claims that there is no conflict with
> the classical insolubility of the problem. Kieu claims that
> probability makes all the difference. But the classical (Turing)
> argument shows the non-existence of a halt-testing machine by
> applying it to itself. If, as Kieu suggests, his quantum-theoretic
> test could be made into a classical procedure (by simulating the
> solution of the Schroedinger equation on a computer), then it looks
> as if you could still get a similar contradiction by applying it to
> itself - and getting a probability which is both more than 1/2 and
> less than 1/2. Anyway, I do not see that probability *in itself*
> makes all the difference. Thus this section does not convince me of
> the lack of conflict and I am more inclined to suspect something is
> wrong with the theory as applied to infinitely many states.

- I am sorry that I have failed to convince the reader on this point.

> If there *is* a solution to a Diophantine equation, there is no
> problem - you can do it trivially just by ordinary computation over
> all integer values until you reach the solution in a finite time. In
> this case Kieu offers no improvement on the obvious. But his positive
> claim is that for each insoluble equation there is a time T, after
> which you know that there is near-certainty of there being *no*
> solution for any value, however large. This suggests that after time
> T, the system has 'sensed' the non-existence of any zero at or above
> 2^(2^(2^(2^.....[as many as you like].....2^T)))...

-  Yes, such "suggestion" is correct, even though it is 
counterintuitive (as counterintuitive as with many other quantum 
mechanical results).  This is a result implied by the quantum adiabatic 
theorem, which states that after certain time (this time here is the 
inverse of the rate of change of the time-dependent Hamiltonian) the 
probability to obtain the final ground state is not less than a given 
real number (between 0 and 1).  The required time is of course a 
function of this real number, even though the theorem does not specify 
this function (otherwise, our task of deciding solutions for 
Diophantine equations would have been somewhat simpler).  Disputing 
this "suggestion" would thus entail disputing the quantum adiabatic 
theorem itself, and not just our algorithm.
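
For reference, the usual textbook form of the adiabatic condition (my 
paraphrase of the standard treatment, e.g. Messiah's, not a statement 
from the paper): writing H(s) with s = t/T, ground state |E_0(s)> and 
first excited state |E_1(s)>, the final overlap with the ground state 
tends to 1 provided the total time T satisfies, roughly,

    T \gg \max_{0 \le s \le 1} |\langle E_1(s)| dH/ds |E_0(s)\rangle|
          / \min_{0 \le s \le 1} [E_1(s) - E_0(s)]^2 ,

which is finite whenever the gap stays away from zero, but which the 
theorem does not convert into an explicit, computable function of the 
desired success probability.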

> I find this hard to believe. The adiabatic cooling method, which is
> the basis of Kieu's idea, is claimed only to give a square-root
> speed-up in search routines, the same speed-up as comes from more
> conventional quantum-computational schemes. How is it then that it
> can speed up an infinite search *more than exponentially* - and so
> arrive at a finite time?

- Firstly, the claim of a square-root speed-up in search routines is 
derived (by Roland and Cerf, PRA 65 (2002) 042308) only for 
*particular* quantum adiabatic processes, with a particular initial 
state and a particular class of time-dependent Hamiltonians (governing 
the quantum adiabatic processes).  This claim is certainly *not* 
universal for all searching Hamiltonians, as the search time should 
depend on the path connecting the initial to the final Hamiltonian 
(see Farhi, Goldstone and Gutmann in 
http://arxiv.org/abs/quant-ph/0208135 ).  As a matter of fact, there 
exist certain classes of Hamiltonians (see Bae and Kwon in 
http://arxiv.org/abs/quant-ph/0205048, Eryigit and Eryigit in 
http://arxiv.org/abs/quant-ph/0309021, and also Das, Kobes and 
Kunstatter in J Phys A 36 (2003) 2839) that result in a finite and 
constant searching time, *independent* of the size of the database to 
be searched!  Secondly, it should be pointed out here that the 
time-dependent Hamiltonian in our algorithm is not the same as the one 
employed in Grover's search.

I sincerely thank Martin Davis and Andrew Hodges for their interest 
in, and their questions and comments on, the algorithm.  I do believe 
that the problem under consideration is too important not to have a 
thorough discussion and investigation.  I would thus be grateful for 
any further comments.


2)-------------------------------------------------

In the long and critical writings of Elena and Lorenz about the 
proposed algorithm, the main objection I can identify is that the 
algorithm cannot be realised physically because of (thanks to) the 
energy-time uncertainty principle.

I myself, with obvious vested interests in the problem, would love to 
find out why such a proposal cannot be implemented physically.  
However, unfortunately, the reason given above by Elena and Lorenz is 
flawed because of a serious misunderstanding of the uncertainty 
principle as applied to the algorithm.

On the one hand, we can understand the uncertainty principle as 
follows.  The system to be measured is prepared in a state, is then 
allowed to evolve in time subject to a certain Hamiltonian, and then 
its energy is measured to obtain some value E_i (which, by quantum 
mechanical postulate, is an eigenvalue of the Hamiltonian at the time 
of measurement).  We repeat the whole cycle (preparing the system in 
the *same* state and letting it evolve under the *same* Hamiltonian 
before a final measurement of its energy) N times, with N large.  Out 
of that large N, we would get the value E_1 for the measured energy 
N_1 times, say, the value E_2 for N_2 times, etc.  The ratio N_i/N 
tends to the probability that we get E_i in a measurement.  With these 
probabilities we can then deduce the average energy \bar{E} and its 
standard deviation \Delta E.  The energy-time uncertainty principle 
states that the product of this energy spread and the measuring time 
(taken to be the same in each repeated measurement) has to be greater 
than \hbar divided by 2.  That is, in order to reduce the energy 
variance/spread we would have to increase the measuring time 
accordingly (this measuring time being the interaction time between 
the system measured and the measuring apparatus).  That is all there 
is to this uncertainty principle (which is quite different from the 
other uncertainty principle, concerning position and momentum).
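
A toy numerical version of this repetition (the levels and 
probabilities below are assumed purely for illustration):

    # Simulated repetition: draw N "energy measurements" from an assumed
    # distribution, then form the frequencies N_i/N, the mean \bar{E}
    # and the spread \Delta E entering \Delta E \Delta t >= \hbar/2.
    import numpy as np
    rng = np.random.default_rng(0)

    levels = np.array([0.0, 1.0, 4.0])    # assumed eigenvalues (illustration only)
    probs = np.array([0.6, 0.3, 0.1])     # assumed outcome probabilities

    N = 10_000
    samples = rng.choice(levels, size=N, p=probs)
    for E in levels:
        print(f"E_i = {E}:  N_i/N = {np.mean(samples == E):.3f}")
    print(f"mean E  = {samples.mean():.3f}")
    print(f"Delta E = {samples.std():.3f}")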

Seen in this light, the energy-time uncertainty principle has nothing 
to do with our algorithm, as the algorithm is not concerned with the 
spread/variance of the energy measured.  (The measuring time involved 
in the uncertainty principle is NOT the evolution time of our 
algorithm.)  The algorithm refers only to the probabilities (as 
indicated by the ratios N_i/N above).  If in any *single* measurement 
we obtain E_k = 0, then we can immediately stop the whole thing and 
declare that the Diophantine equation under consideration has a 
solution.  It is only in the case of no solution that we would need to 
carry out the repetition above to obtain the probabilities, in order 
to identify the ground state as the one that corresponds to a 
probability of more than one-half.  (After each measurement, we go 
back, prepare our system in the coherent state, and repeat the cycle 
again - thus, I am quite mystified by Elena's remark that measurement 
would affect our process.  In which way?)
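
In pseudo-Python, the decision procedure as just described (with 
`measure_energy` standing in, hypothetically, for one full 
prepare-evolve-measure cycle, and the run counts being my own choices 
for illustration):

    # Decision rule: a single outcome E = 0 settles "solution exists";
    # otherwise repeat until one nonzero level carries a frequency
    # above 1/2, identifying it as the ground state ("no solution").
    def decide(measure_energy, min_runs=100, max_runs=10_000):
        counts = {}
        for run in range(1, max_runs + 1):
            E = measure_energy()          # one prepare-evolve-measure cycle
            if E == 0:
                return "has a solution"
            counts[E] = counts.get(E, 0) + 1
            if run >= min_runs and max(counts.values()) > run / 2:
                return "no solution (nonzero ground state identified)"
        return "undecided within max_runs"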

In summary, the uncertainty principle cannot be invoked here to rule 
out the possibility of some physical implementation of our algorithm.  
Up to now, we have not been able to find any physical principle that 
prohibits such an implementation.  Physical implementation aside, on 
the other hand, we have indicated that a simulation of the algorithm 
on classical computers is indeed also possible.


3)-----------------------------------------------

I would like to add two further (but related) points, applicable to 
both of the sections above, replying to Elena-Lorenz and to 
Davis-Hodges:

a)  With integer coefficients, the final Hamiltonian has integer-valued 
eigenvalues; that is, the separation between any two distinct 
eigenvalues is *at least* ONE unit.  This is to be contrasted with the 
case of \sqrt{2}, say, in which there is no non-zero lower bound on 
the gap between eigenvalues, which might thus require 
infinite-precision measurement.

b)  Thanks to this finite gap (of at least one unit) for Diophantine 
equations, the precision of the energy measurement, \Delta E, is only 
required to be (substantially) less than one unit in order to 
distinguish between the integer-valued eigenvalues.  That is, 
according to the energy-time uncertainty relation, \Delta E \Delta t > 
\hbar/2, the *measuring time* \Delta t need only be *finite* for such 
a *non-zero* \Delta E.
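
As a worked instance of (b), with numbers chosen purely for 
illustration: taking \Delta E = 1/4 of a unit (comfortably below the 
unit gap) gives

    \Delta t > \hbar / (2 \Delta E) = 2 \hbar,

a perfectly finite measuring time; whereas with no lower bound on the 
gap, as in the \sqrt{2} case, \Delta E would have to be made 
arbitrarily small and \Delta t would grow without bound.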


Sincerely,

Tien Kieu




