FOM: Church-Turing hypothesis -- reply to Vorobey

Anatoly Vorobey mellon at pobox.com
Thu Nov 19 23:07:38 EST 1998


[my sincere apologies for the long delay in replying. To recreate the
context in a few words: I challenge Joe Shipman's claim that the
question of recursiveness of physical systems is independent of the
question of recursiveness of abstract algorithms, or effectively
computable functions, and argue that a mathematical model of nonrecursive
behavior that is perceivable by humans as a kind of effective computation
is needed in order to meaningfully work with the notion of nonrecursiveness
in physical systems.]

I wrote:

>> I beg to differ. This isn't at all how we normally prove that a
>> computer solves a problem correctly. What we actually *do* is prove
>> that *an algorithm* - a notion completely detached from the physical
>> world and current physical theories - solves the problem correctly.

Joe Shipman writes:

> Yes, but we still need to know that our physical devices really 
> implement that abstract notion.

Yes, but the catch is, we don't need to know that *with mathematical 
certainty*.

In fact, we never do know that with computers. What if there is a fault
in the hardware? How do we guard against that? We use more computers
independently executing the same algorithm. What if there's a problem in
all of them? 

We *are* reasonably sure that the computer "works right"; that is, that
it correctly implements the abstract algorithms we wish it to implement.
But our trust in that isn't at all like our trust in a mathematical
truth; it is more like our trust that the sun will rise tomorrow.
It has more to do with philosophical induction than with whatever we
perceive as mathematical certainty.

As an instructive example, you might recall the story of the Pentium
processor's floating-point division bug, discovered and trumpeted in the
media a few years ago. The CPU performed some division operations
incorrectly. Did anyone think to ask back then, "Maybe the computer *is*
dividing correctly and it's our mathematicians who are doing it wrong?"
No - it was clear to everyone from the outset that the fault lay in the
computer's implementation of the abstract division algorithm, not in the
algorithm itself. Mathematical certainty beats physical trust every time.

Yet another case in point is the proof of the four-color theorem (or
should I say "conjecture"?). Rota offers some interesting thoughts on it
in "Indiscrete Thoughts". He observes that, far from "trusting the
device" - i.e. trusting that the computer has correctly performed the
complex exhaustive algorithm, concluding that the theorem holds, and
forgetting about it - mathematicians have been searching for simpler and
simpler variants of the algorithm, to gain more trust in its physical
realizations. And of course, the search for a purely mathematical proof,
which some would consider the only "true" proof, has not stopped. This
story is interesting because it says something about mathematicians'
"proof aesthetics" - but also because it illustrates how different trust
in a mathematical proof and trust in a computer are. And that was a
_classical_ computer, not a quantum one!

I wrote:

>> Witness the steadily growing field of Program (and chipset) Verification in
>> CS. One usually describes the program (or chip design) to be verified in a
>> suitably simplified and powerful high-level programming language; then
>> one defines the desirable result the program should produce (or the chip's
>> behavior) as an invariant, or a statement in a suitable kind of temporal
>> logic, etc. One doesn't rely on quantum electrodynamics (which is needed to
>> make the computer work), classical electrodynamics, etc. in the course of
>> this process. Physics is never relevant, mathematized or not.

Joe Shipman asks:

> But why do you believe the computer performs as advertised?  

Because I know how it works, and the underlying physical principles have
not yet been falsified (and have been experimentally verified a staggering
number of times). 

Moreover, regardless of why I believe in the workings of the physical
device, my point still stands that this belief is irrelevant to what 
we mean by "verifying" the algorithm. 

I argue that there are three separate stages to gaining trust
in the idea that a particular device computes a particular function:

1. The device is perceived as a physical realization of an abstract
mathematical model of computing.

2. That model is shown, mathematically, to compute correctly the desired
function.

3. Physics, experiment, etc. provide enough evidence (which is still
always inconclusive [insert obligatory reference to the Popperian
framework here]) that the device does indeed correctly implement the
mathematical model.

You are ignoring the first stage, while I think its importance cannot be
overestimated. In particular, think of ordinary computers: it is the first
stage which allows us to regard them as *computing devices*, and not just
as a lot of wires and microchips carrying electrical current all around.
It is the first stage which tells us that such-and-such values of voltage 
correspond to bit 0, and others correspond to bit 1, and so on. The first
stage is usually hidden from us because it's only relevant for a 
*designer* of a classical computer; but it's still there, and must be
explicitly dealt with when we desire to talk about computational processes
going on in *existing* physical systems.

The second stage is what we really mean by "verifying" an algorithm
or a program - what is done in practice in Program Verification, etc. 
It's this process that allows us to say "we *proved* that the
program performs correctly". However - and this is my original point -
the second stage is impossible without the first one, and in particular,
it is impossible to carry out when you are faced with a black box which,
you are told, solves the halting problem, and you have no abstract
model of its working such that you can regard the black box as a physical
implementation of the model.
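
To give the flavor of this second stage, here is a toy illustration (a
sketch of my own, in modern Python; the function and its names are
invented for the example, and real verification tools work with proofs
rather than run-time checks). The specification of integer division is
stated as an invariant plus a postcondition, and the argument that the
loop maintains them is pure mathematics - no physics anywhere in sight:

    def divide(a, b):
        """Compute (q, r) with a == q*b + r and 0 <= r < b,
        for a >= 0 and b > 0.

        The correctness argument is purely mathematical: the invariant
        a == q*b + r holds before the loop and is preserved by every
        iteration; together with the exit condition r < b it gives
        exactly the specification.  The asserts merely echo that
        pencil-and-paper proof at run time.
        """
        assert a >= 0 and b > 0
        q, r = 0, a
        assert a == q * b + r          # invariant holds initially
        while r >= b:
            q, r = q + 1, r - b
            assert a == q * b + r      # invariant preserved by the body
        assert 0 <= r < b              # exit condition + invariant = spec
        return q, r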

The third stage is always inconclusive, but that's rarely a problem. We can
never be completely sure of our physics, or be sure that the device was
built correctly, etc., but that holds for just about everything in life,
not just computers, and we somehow manage. Sometimes this lack of 
certainty is acknowledged explicitly. For instance, researchers in
cryptology are fond of pointing out that their probabilistic algorithms
(e.g. primality checking) can be iterated so as to make the probability
of an error smaller than the probability of the hardware malfunctioning
(or of lightning striking the computer). While I think that speaking of
the probability of lightning striking the computer as a well-defined
and useful quantity betrays a certain naivete in matters of probability,
the point is that the imperfections of actual hardware (and the universe
in general) are acknowledged, and all of computer science doesn't suddenly
collapse due to them.
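
To be concrete about the kind of iteration those researchers have in
mind, here is a sketch (mine, in Python; the parameter names are
invented for the example) of the Rabin-Miller probabilistic primality
test. A composite number survives a single random round with
probability at most 1/4, so k independent rounds leave an error
probability of at most 4^(-k) - very quickly smaller than any honest
estimate of the hardware failing:

    import random

    def is_probable_prime(n, rounds=40):
        """Rabin-Miller test: each round either proves n composite or
        lets it pass.  A composite passes one round with probability at
        most 1/4, so `rounds` rounds give an error probability of at
        most 4**(-rounds)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:              # write n - 1 = d * 2**s with d odd
            d, s = d // 2, s + 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False           # witness found: n is composite
        return True                    # prime with overwhelming probability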

Consider yet another example, out of the multitude of possibilities. Suppose
that I ask you to multiply, using a pencil and a piece of paper, 
2023126123823147234 and 21324782349123478294823948234. It would take quite
a while, but you can certainly do it. Will you be completely certain of
your result? Not really, I hope: it's clear that mistakes can happen in
such a long and boring computation. However, the fact that your
result may be wrong does not, and will not, cause you to doubt the
correctness of the algorithm for multiplication that you used. The
physical system - you - can make mistakes, but the algorithm of
multiplication can be formally proved correct, and it does *not* make
mistakes. The reason you are sure that there exists a "right" answer -
the "right" value of the product - and that such a "right" answer can
_in principle_ be reached by you using pencil and paper, is that you can
view your activity as a physical realization of a provably faultless
mathematical model.
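
The pencil-and-paper procedure itself can be written down exactly. Here
is a sketch (mine, in Python, with names invented for the example) of
the schoolbook algorithm; its correctness is a short induction on the
digits of the second factor, quite independent of whether any
particular tired human carries it out faithfully:

    def schoolbook_multiply(x, y):
        """Pencil-and-paper multiplication of nonnegative integers: one
        partial product per digit of y, each shifted one more place,
        then summed - here folded into a running total.  Correctness is
        an induction on the digits of y, using y = 10*(y // 10) + (y % 10).
        """
        result = 0
        for d in str(y):                     # digits of y, left to right
            result = result * 10 + x * int(d)
        return result

    # The model agrees with the "right" answer it defines:
    a = 2023126123823147234
    b = 21324782349123478294823948234
    assert schoolbook_multiply(a, b) == a * b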

Joe Shipman:

> Either because it is pragmatically seen to work, or because you believe
> the physics that went into its design and construction, and both these
> considerations could also be relevant for nonrecursive physical systems.

Yes, but, as I argue above, neither of these considerations lets me
conclude mathematically that a device in front of me solves the halting
problem. Only if I can view the device as realizing an abstract notion
of computing (which of course can be different from recursiveness)
and manage to prove mathematically that such a notion allows one to
solve the halting problem, can I reach such a conclusion.

I wrote:

>> A "proof" of correctness which relies on current physical theories will not
>> satisfy *mathematicians*. They would perceive it as lacking the kind of
>> absoluteness one expects from a proof in mathematics. The first thing they
>> would ask would be, "What if the current physical theories are incorrect?".

Joe Shipman writes:
 
> Of course you are correct here, because we have the notion of algorithm
> and accept "CT1".  But all that tells us is that a person could
> IN PRINCIPLE solve the problem "by hand" using an algorithm.  If you
> look at a calculation that needs to be done by a real computer, for
> example the verification that a large integer is prime, you really only
> believe the computer because 1) it has always worked before, and
> 2) you have a THEORY of WHY it (considered as a physical system)
> actually does implement the abstract notion of algorithm.

*And* because I can prove mathematically that the algorithm does indeed
solve the given problem. Note, in any case, that you are actually agreeing
with me here, by saying "... actually does implement the abstract notion
of algorithm". That's exactly what I've been saying - that you need
to have an abstract notion of algorithm to match against your hypothetical
halting problem oracle.

I wrote, about a hypothetical physical "black box" which appears to solve
the halting problem:

>> But you can't actually *verify* that the device solves the halting problem.
>> You may have lots of physical evidence (which, by its nature, is never
>> conclusive) for the particular physical theory
>> which lets you conclude that the device "computes" a nonrecursive function,
>> but you cannot confirm that result *by working with the device*. You cannot
>> prove *mathematically* that the device computes a nonrecursive function; 
>> you cannot confirm that experimentally; the device doesn't "talk back" 
>> to you - it doesn't make new predictions useful for confirming or 
>> refuting the underlying physical theory. Also, the assertion that it 
>> computes a nonrecursive function is nonfalsifiable - and this makes 
>> its usefulness dubious, especially
>> in the Popperian scientific framework.

Joe Shipman writes:

> I claim that this is no different from the position you are in with 
> respect to a real
> computer calculation that is too large to be feasibly verified "by hand".

I disagree emphatically. The *crucial* difference is that you have a working
notion of algorithm in the latter case, and can prove (or believe on the
basis of your mathematical intuition) that the algorithm does indeed solve
the problem. This is very different from your hypothetical black box.

There are many other differences as well; note especially the very peculiar
phenomenon of nonfalsifiability in the case of a physical
halting problem oracle - you can't refute, even _in principle_, the 
assertion that it solves the halting problem, while you certainly are
able to do that (in principle) in the case of a large classical 
computation.
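
To make the asymmetry vivid, here is a toy sketch (mine, in Python,
with invented names) of the only kind of experiment you can run against
such a box: an answer of "does not halt" can in principle be caught out
by simply running the machine long enough, but a wrong answer of
"halts", given about a machine that in fact never halts, can never be
exposed by any finite amount of observation - so a box whose only
errors are of that second kind can never be refuted.

    import itertools

    def try_to_refute(oracle_says_halts, machine_steps, budget):
        """One-sided experimental test of a purported halting oracle.
        `machine_steps` is an iterator that yields once per step of the
        machine and is exhausted exactly when the machine halts.
        Returns True only if the oracle has been caught in an error."""
        steps_taken = 0
        for _ in machine_steps:
            steps_taken += 1
            if steps_taken >= budget:
                return False           # patience exhausted: nothing learned
        # The machine halted within the budget.
        return not oracle_says_halts   # refuted only if it said "does not halt"

    # A machine that halts after five steps, and one that never halts:
    print(try_to_refute(False, iter(range(5)), budget=10**5))    # True
    print(try_to_refute(True, itertools.count(), budget=10**5))  # False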

I wrote:

>> Imagine that I let you work with a black box,
>> with one input string and one output string, which purports to solve the
>> halting problem. 
[...]
>> You cannot then prove experimentally, by working with the black box, 
>> that it doesn't solve the halting problem (to be able in general to do 
>> that, you would have to be able to recursively enumerate nonhalting 
>> machines to find the one it misidentifies as halting - i.e. to be able 
>> yourself to solve the halting problem).

>> And now, imagine the same black box as in the preceding paragraph, 
>> only this time you *don't know* that it's an algorithm in the classical 
>> sense. Does the situation look familiar?

Joe Shipman:

> But this is already the situation we have with respect to
> "quantum computers".  A quantum algorithm cannot be simulated "by hand"
> in the same way a classical algorithm can, but because we believe the
> physics involved in their design we are willing to say, for example,
> that "factoring integers can be done by a quantum computer in
> polynomial time" (Shor, 1994).

This isn't, in fact, the case. The currently accepted models of quantum
computers (either quantum Turing machines or quantum circuits) can be
simulated by their classical siblings with at most an exponential slowdown.
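
Here is a sketch (mine, in Python with NumPy; the names are invented
for the example) of what such a brute-force classical simulation looks
like: the state of n qubits is a vector of 2^n complex amplitudes, and
each gate is just a small matrix applied to that vector - perfectly
recursive, merely exponentially wasteful in time and memory.

    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n_qubits):
        """Apply a 2x2 unitary `gate` to qubit `target` of a state
        vector of length 2**n_qubits."""
        psi = state.reshape([2] * n_qubits)
        psi = np.tensordot(gate, psi, axes=([1], [target]))
        psi = np.moveaxis(psi, 0, target)   # put the target axis back
        return psi.reshape(-1)

    # Example: three Hadamard gates produce the uniform superposition.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                          # the basis state |000>
    for q in range(n):
        state = apply_single_qubit_gate(state, H, q, n)
    print(np.round(np.abs(state) ** 2, 3))  # eight probabilities of 0.125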

But the more important fault in your argument above lies, in my opinion,
in your acceptance of "quantum computers" as examples of physical devices
that may teach us new things about computability. In fact, the situation
is exactly the other way around. One shouldn't forget the
simple fact that we don't in fact *have* quantum computers as physical
devices; quantum computing has been progressing, since the work of
Deutsch in the 1980s, as a theoretical discipline. A quantum Turing
machine, for instance, provides for us exactly the theoretical notion
of algorithm that I claim is needed for recognizing a physical process
as a kind of computation. A quantum Turing machine is completely detached
from reality; sure, it has its origin in quantum mechanics, but the model
itself is completely mathematical - that fact is easily emphasized by
stressing that we don't yet actually have any physical implementation 
of that model. So, in quantum computing, we *start* from Stage 1 in 
my terms above, and then proceed to prove fascinating results in 
Stage 2 - e.g. Shor's quantum algorithm for factoring integers in
polynomial time (which, I must hasten to add, is still essentially
probabilistic - the algorithm outputs a factor with high
probability - and thus not necessarily relevant to issues of computability
in principle). We don't have Stage 3 yet since we don't have physical
quantum computers and aren't even completely sure we can ever build them.

Moreover, that theoretical notion of algorithm used in currently accepted
quantum computing models is not new - it coincides with recursiveness
(in fact, the harder direction is, surprisingly, to prove that every
recursive function can be computed on a QTM - one must use the work of
Bennett to argue that the function can be computed reversibly, which is
a necessary condition for its being computable on a QTM). It only provides
us with a large (hopefully, exponential) speedup, which is due to the fact
that Nature "remembers" an arbitrarily large linear combination of basis
quantum states, and "applies" a quantum transformation to all of them
at once. Nature works in these models as a very powerful parallelizer,
but this fails to create any nonrecursive behavior.
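
As an aside, the reversibility requirement mentioned above is itself a
purely classical, recursive matter. Here is a toy sketch (mine, in
Python) of Bennett's idea in miniature: an irreversible gate such as
AND becomes reversible once its inputs are kept around, which is
exactly what the Toffoli (controlled-controlled-NOT) gate does.

    def toffoli(a, b, c):
        """Toffoli gate on three bits: flip c exactly when a = b = 1.
        It is its own inverse, and with c = 0 it computes
        (a, b, a AND b) - a reversible version of AND."""
        return a, b, c ^ (a & b)

    # Reversibility: applying the gate twice is the identity.
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert toffoli(*toffoli(a, b, c)) == (a, b, c)

    # With the scratch bit set to 0, AND is computed without erasing anything.
    assert toffoli(1, 1, 0) == (1, 1, 1)
    assert toffoli(1, 0, 0) == (1, 0, 0)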

Do you think, by the way, that the fascinating questions of quantum
computing deserve a separate thread of their own?

I wrote:

>> It is not inconceivable that there are physical phenomena which "compute"
>> nonrecursive functions. However, in order for us, rational human beings, to
>> regard this physical behavior as computation (and not as "computation") and
>> to trust its results we need to be able to view it as a realization of some
>> abstract, intuitively appealing notion of algorithm/computation, just as we
>> view physical computers as realizing our various abstract notions of
>> algorithms.

Joe Shipman:

> By the same argument, quantum computers "do not compute".

Not at all! Quantum computers "compute" exactly because we have an
abstract mathematical model of "how" they compute. In the usual model
of a classical Turing machine, for instance, we don't ask ourselves "how"
the machine moves the tape from one cell to another, or writes a symbol
on the tape. These are "physical" questions which are irrelevant to the
model. In the same way, in QTMs, we just postulate that the machine's
state changes by applying local unitary transformations, without
specifying "how" that happens physically. We still get a model of
"computation", and one that, in fact, coincides with recursiveness.

Joe Shipman:

> Yes, but I'm contemplating a different situation where an independently
> conceived and verified physical theory is used as the basis for the 
> technology used to construct the
> device.

And I still claim that you need an abstract model of "algorithm" which
does not coincide with recursiveness in order to meaningfully claim
(let alone verify) that the device solves the halting problem.

[about versions of Church's Thesis: C_T_1 refers to recursiveness of
any abstract notion of algorithm which humans perceive as effective, and
C_T_2 refers to recursiveness of behavior of physical systems]

> > The problem, as I tried to describe it, is that you really need C_T_1
> > in order to meaningfully talk about perceiving, verifying, refuting,
> > and otherwise working with C_T_2.

> I disagree with this; C_T_2 states that no physical device exhibits 
> behavior of a certain type and can be formalized independently of the 
> informal notion of "algorithm".  

Formalized, maybe. It's not hard to formalize the statement "this device
solves the halting problem". But meaningfully worked with - no; I refer you to
my large paragraph above which describes what you can't do (pretty
much anything useful) with a hypothetical black box which you suspect
solves the halting problem. 
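
(To spell out what I mean by "formalize": writing B for the box's
observed input/output behavior and \varphi_e for the e-th partial
recursive function - notation chosen here purely for illustration - the
statement is simply

    \forall e \forall x ( B(\langle e, x \rangle) = 1 \iff \varphi_e(x)\downarrow ).

The difficulty lies not in writing this down, but in doing anything
useful with it.)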

> Steve Simpson will correct me if I'm wrong here, but I think the 
> identification of the primitive recursive functions as effectively 
> computable goes back to Hilbert and predates the definition of general 
> recursive functions; it would have been possible to formulate a C_T_1 
> and a C_T_2 with respect to *that* notion of computation, and those 
> could have been *independently* refuted (one could build a device 
> to calculate a non-p.r. function and prove from physical considerations 
> that it did so, while independently observing that the
> same function could be calculated "by hand").

Yes, but their refutation would have to rest on a notion of "algorithm"
which does not coincide with primitive recursiveness - be it the
formalizable "algorithm as a Turing machine" or intuitive "algorithm
as something we can calculate by hand". 

Sincerely yours,
Anatoly.

-- 
Anatoly Vorobey,
mellon at pobox.com http://pobox.com/~mellon/
"Angels can fly because they take themselves lightly" - G.K.Chesterton



