[FOM] Status of Quantum Computing vis-a-vis Chomsky Hierarchy/Computing
duraid at fl.net.au
Wed Oct 16 22:54:13 EDT 2002
Taking each in turn:
> There are 10 things: (page 2)
> *1. An algorithm is given as a set of instructions of finite size.
Nothing different here. A quantum algorithm is given as a finite set of
unitary transformations on (portions of) a Hilbert space.
> *2. There is a computing agent, usually human, which can react to the
> instructions and carry out the computations.
Again, no disagreement here.
> *3. There are facilities for making, storing, and retrieving steps in
> a computation.
I'm not quite sure what it means to "retrieve steps in a computation".
But given that, in the worst case, a poor student with a mountain of ink
and paper could simply write down the state of the machine (entanglement
and all), starting from the initial state and again after each
application of a step of the algorithm (a unitary transformation), I'm
going to guess that, again, quantum computing doesn't disagree with this
point. On the other hand, if this point means "I can at any time take a
perfect 'snapshot' of my computation (retrieving), go on a three-week
holiday (storing), and resume my computation on an entirely different
machine (making)", then of course this is _not_ true for quantum
computation, because one can't clone the state of a quantum computer to
begin with. Once you embark on a (physically real) quantum computation,
the state is "in there somewhere", but you can't pull it out. But I
reiterate: this doesn't prohibit you from "doing quantum computation"
using (perfectly classical!) pen and paper.
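To make this pen-and-paper picture concrete, here is a minimal Python sketch (my own illustration, not anything from the literature): the machine state is just a vector of complex amplitudes, and each step of the algorithm is one matrix-vector multiplication.

```python
# A classical "pen and paper" simulation of a quantum computation:
# the state is a vector of complex amplitudes, and each algorithm step
# is one multiplication by a unitary matrix.

def apply_unitary(U, state):
    """Apply a unitary (given as a list of rows) to a state vector."""
    n = len(state)
    return [sum(U[i][j] * state[j] for j in range(n)) for i in range(n)]

s = 1 / 2 ** 0.5
H = [[s, s], [s, -s]]            # the Hadamard gate, a standard unitary

state = [1 + 0j, 0 + 0j]         # the qubit starts in |0>
state = apply_unitary(H, state)  # now (|0> + |1>)/sqrt(2)
state = apply_unitary(H, state)  # H is its own inverse: back to |0>

probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[1.0, 0.0]
```

Nothing quantum happens here, of course; that is exactly the point about the student's simulation.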
> *4. Let P be a set of instructions as in *1 and L be a computing agent
> as in *2. Then L reacts to P in such a way that, for any given
> input, the computation is carried out in a discrete stepwise
> fashion, without use of continuous methods or analogue devices.
Quantum computation does not require the use of continuous methods or
analog devices. This point is absolutely vital, lost on a significant
number of people in the field, and of course lost on the people who
claim that (their perverted or simply misguided interpretation of)
quantum computation can solve e.g. the halting problem.
While it is true that the state of a quantum computer is usually
considered as a vector in a Hilbert space where the length of this
vector in each direction is specified by reals (subject to the complex
superposition constraint), _the fact that these numbers are real is
unimportant_. This is of great interest to those trying to build a
quantum computer because it is not physically reasonable to construct a
device that can perform e.g. a rotation of a nuclear spin by arbitrary
_real-valued_ angles. The idea is that even though I'm using an analog
device to send you this email (since any conventional computer can be
described by real-valued parameters (voltages, currents etc)), I am
"throwing away" this analog nature and using it in a strictly digital
manner. Actually, I'm not throwing away anything so much as trading the
analog behaviour (which, in the limit of zero error, would let my PC
solve the halting problem) for resilience to errors: if the voltage at
some point in my Pentium 4 is not sqrt(2) volts, but 2^0.501 volts, that
doesn't matter: these are both considered to be the logical value "1".
This is the essence of digital computing.
Now: Quantum computers are digital too. (!!!)
In the same way, a (sane) quantum computer trades off "analog paradise"
(e.g. the ability to rotate a spin by an arbitrary real-valued angle)
for "digital resilience" (being able to rotate a spin to within one part
in a billion is good enough.)
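A toy numerical illustration of this trade-off (a Python sketch under the simplest possible assumption, a single real-amplitude spin): an angle error of 1e-9 radians perturbs the outcome probability by an amount of the same tiny order.

```python
import math

def rotate(theta, state):
    """Rotate a real-amplitude single-qubit state [a, b] by theta."""
    c, s = math.cos(theta), math.sin(theta)
    a, b = state
    return [c * a - s * b, s * a + c * b]

exact = rotate(math.pi / 8, [1.0, 0.0])
noisy = rotate(math.pi / 8 + 1e-9, [1.0, 0.0])  # angle off by 1e-9 radians

# The resulting drift in the outcome probability is of the same tiny order.
drift = abs(exact[0] ** 2 - noisy[0] ** 2)
print(drift)  # on the order of 1e-9
```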
Let me make this last point even clearer:
An _arbitrary_ quantum computation can be achieved with unitary
transformations that are performed _only_ to a certain, finite level of
precision (though this will, of course, require an increase in the
number of operations). In other words, it has been shown (quite
rigorously) that there exists a so-called "error threshold" for quantum
computation which, when achieved experimentally, allows quantum error
correction techniques to be used to execute arbitrarily large quantum
computations.
More surprisingly, some quantum algorithms (e.g. the quantum Fourier
transform, which is of particular interest) still yield "correct
answers" (when you're trying to find the period of a function by using a
QFT) even when operations with surprisingly high error are allowed. See
Coppersmith's article on an "approximate Fourier transform" (if the link
is broken, go to
http://domino.watson.ibm.com/library/CyberDig.nsf/Home and search for
"coppersmith AND fourier".)
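For flavour, here is a loose Python illustration of the same idea (not Coppersmith's actual construction, which drops the smallest controlled rotations): round every phase angle in the QFT matrix to a handful of bits and check that the entries barely move.

```python
import cmath
import math

def qft_entry(j, k, n, phase_bits=None):
    """Entry (j, k) of the QFT on 2**n points; if phase_bits is given,
    every phase angle is rounded to that many binary digits."""
    angle = 2 * math.pi * j * k / 2 ** n
    if phase_bits is not None:
        step = 2 * math.pi / 2 ** phase_bits
        angle = round(angle / step) * step
    return cmath.exp(1j * angle) / 2 ** (n / 2)

n = 8   # a 256-point QFT
dev = max(abs(qft_entry(j, k, n) - qft_entry(j, k, n, phase_bits=5))
          for j in range(2 ** n) for k in range(2 ** n))
print(dev)  # well under 0.01: coarse phases leave the matrix nearly unchanged
```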
Just a few weeks ago I was talking to Peter Shor about this point, and
he agreed that there is a rather conspicuous lack of a text that deals
with the question of "how much precision does a quantum algorithm
require." While the fact that it doesn't require infinite precision is
established, being able to pin down exact numbers would be of great
interest, both theoretical and practical.
Nevertheless, there is one paper on this topic which is required
reading: "Quantum Complexity Theory" by Bernstein and Vazirani:
http://www.cs.berkeley.edu/~vazirani/bv.ps . Here is an excerpt from the
introduction to whet your appetite:
"One important concern is whether quantum Turing Machines are really
analog devices, since they involve complex transition amplitudes. It is
instructive to examine the analogous question for probabilistic Turing
Machines. There, one might worry that probabilistic machines are not
discrete, and therefore not "reasonable", since they allow transition
probabilities to be real numbers. However, there is extensive work
showing that probabilistic computation can be carried out in such a
way that it is so insensitive to the transition probabilities that they
can be allowed to vary arbitrarily in a large range [34, 44, 47]. In
this paper, we show in a similar sense, that quantum Turing Machines are
discrete devices: the transition amplitudes need only be accurate to
O(log T) bits of precision to support T steps of computation."
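One can watch this phenomenon in miniature with a Python sketch (an ordinary classical simulation, not a quantum Turing Machine): round the entries of a rotation to a fixed number of bits, iterate it T times, and observe that the accumulated error grows only linearly in T, so each extra bit of precision doubles the tolerable computation length.

```python
import math

def round_bits(x, bits):
    """Keep only `bits` fractional binary digits of x."""
    return round(x * 2 ** bits) / 2 ** bits

def rotate(c, s, state):
    """One step of a rotation with precomputed cos/sin entries."""
    a, b = state
    return [c * a - s * b, s * a + c * b]

theta, T, bits = 0.1, 1000, 20
c_exact, s_exact = math.cos(theta), math.sin(theta)
c_apx, s_apx = round_bits(c_exact, bits), round_bits(s_exact, bits)

exact = apx = [1.0, 0.0]
for _ in range(T):
    exact = rotate(c_exact, s_exact, exact)
    apx = rotate(c_apx, s_apx, apx)

err = max(abs(e - a) for e, a in zip(exact, apx))
print(err)  # roughly T * 2**-bits: linear in T, so O(log T) extra bits suffice
```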
> *5. L reacts to P in such a way that a computation is carried forward
> deterministically, without resort to random methods or devices;
> e.g., dice.
Few people deny that, in light of quantum mechanics, the universe is
fundamentally nondeterministic (though I'm one of the few who do), but note
that you can always "push" the nondeterminism in a quantum computation
right to the end (the measurement stage). Going back to the student with
pen and paper, he can simulate a quantum computer with his classical
instruments, writing down (perhaps) an enormous sequence of matrices
which, when multiplied, yield some quantum computation. At the end of
all this, he gets, for every possible (classical) state of the observed
quantum computer, a probability that it will indeed be observed in that
state. If you want to say that quantum computation requires random
methods in that he must then find a coin, and flip it, to announce to
the world "Aha! I have finished my quantum computation, and the answer
was 42." then okay, quantum computation requires random methods.
Certainly, the theory says that if you build even a tiny quantum
computer, you can use this as a source of perfectly random bits. But
quantum algorithms are deterministic in the sense that they're not of
the form "perform step 1, and step 2, then flip a coin and if it comes
up heads proceed to step 3, otherwise skip to step 4".
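A sketch of this "push the dice to the end" picture, as a classical Python simulation of a one-qubit circuit: the amplitudes are computed with no randomness at all, and a coin is flipped exactly once, at the measurement.

```python
import random

# Deterministic part: compute the final amplitudes of a tiny one-qubit
# "circuit" (a single Hadamard gate) by plain matrix arithmetic.
s = 1 / 2 ** 0.5
H = [[s, s], [s, -s]]
start = [1.0, 0.0]
state = [sum(H[i][j] * start[j] for j in range(2)) for i in range(2)]

probs = [abs(a) ** 2 for a in state]   # [0.5, 0.5], obtained with no dice

# Nondeterministic part, deferred to the very end: the one measurement.
outcome = random.choices([0, 1], weights=probs)[0]
print(outcome)  # 0 or 1, each with probability 1/2
```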
> I had *3, *4, and *5 in mind in asking the question. *3 is
> suspect due to entanglement and the measurement side effects.
> *4 is suspect due to the "continuous methods or analogue
> devices". *5 is suspect due to "dice".
I hope my rambling shed some light on these issues!
> Rogers then goes on to ask five questions
> *6. Is there to be a fixed finite bound on the size of the inputs?
> *7. Is there to be a fixed finite bound on the size of the set of
> instructions?
> *8. Is there to be a fixed finite bound on the amount of "memory"
> storage available?
> *9. Is there to be, in any sense, a fixed finite bound on the capacity
> or ability of the computing agent?
> *10. Is there to be, in any way, a bound on the length of a
> computation? More specifically, should we require that the length
> of a particular computation be always less than a value that is
> "easily computable" from the input and from the set of
> instructions P? To put it more informally, should we require that,
> given any input and given any P, we have some idea, "ahead of
> time", of how long the computation will take?
> Here, I would say *8 and *9 are suspect.
I don't think *8 is suspect. Even if one could perfectly clone the state
of a quantum computer, nobody is saying that you could store arbitrarily
large amounts of information in one.
For *9, all I can say is that any sane model of quantum computing (or of
any other physical procedure, for that matter) assumes that the agent
can work only with finite precision. That is, he can only prepare and
modify the state of his quantum computer to a finite level of precision,
and when he measures it, he retrieves a finite sequence of classical
bits.
*10 is of some interest because one doesn't (usually) want to measure a
quantum computer (indeed, expose it to the "outside world" at all) until
it has finished its work. So it's particularly important to know how
long a quantum computation will take (and of course quantum computation
would be pointless if calculating this value were itself difficult).