FOM: more on probabilistic proofs
sacook at cs.toronto.edu
Wed Sep 16 18:13:19 EDT 1998
Here are some brief responses to some of Joe's and Harvey's questions
about using the outcome of a probabilistic test (such as Rabin's
primality test) as a rigorous proof:
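As a concrete point of reference (my sketch, not part of the original discussion), the Rabin-Miller test works roughly as follows: for a composite n, at least three quarters of the possible bases a "witness" that n is composite, so k independent random rounds leave an error probability of at most 4^(-k). A minimal Python sketch:

```python
import random

def rabin_test(n, rounds=20, rng=random.Random()):
    """Rabin-Miller probabilistic primality test (sketch).
    Returns False if n is certainly composite; returns True if n passed
    every round (a composite passes each round with probability <= 1/4)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)      # here is where random bits are consumed
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is certainly composite
    return True
```

A "proof" of primality obtained this way is exactly as trustworthy as the randomness fed to `rng`, which is the point at issue above.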
The first question concerns use of a physical source, such as noise
from background radiation, as a source of random bits for the test.
Should we be totally convinced by the result of a test using such
a source? It seems to me that this is more a question of physics than
of mathematics. One would have to see a clear and convincing
physical model for such a source, and a proof (assuming the model)
that the resulting bits were truly random. Even given this, there
would be important questions of implementation (e.g. was the source
contaminated by earth-generated radiation?).
Of course these doubts apply to any published table of random numbers.
I am no physicist, but it seems to me that the quality of natural
sources of random bits should be a fundamental question in physics.
Concerning the question of whether BPP = P, it is tempting to think
yes, simply because good pseudo-random number generators seem to work
perfectly well when implementing probabilistic algorithms. However,
it is quite possible that they work only for "most" inputs to the
algorithm. There still may be infinitely many "bad" inputs (e.g. infinitely
many composite numbers, in the case of Rabin's test) on which they fail.
And these bad inputs may be hard to find.
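To illustrate what a "bad" input looks like (my example, not from the original post): if the bases are not truly random but come from some fixed or predictable source, particular composites can slip through. The composite 2047 = 23 * 89 is the smallest strong pseudoprime to base 2, so a single round of the test using base 2 wrongly reports it as probably prime:

```python
def passes_base(n, a):
    """One round of the Rabin-Miller test for odd n > 2 with base a.
    Returns True if n passes the round, i.e. looks prime to this base."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

print(passes_base(2047, 2))   # True: 2047 = 23 * 89 fools base 2
print(passes_base(2047, 3))   # False: base 3 witnesses compositeness
```

For every fixed finite set of bases there are composites that pass all of them; genuinely random bases are what make the 4^(-k) error bound valid for every input.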
I don't think that there is any consensus among complexity theorists as to
whether BPP = P, but there is plenty of interest in the question.
There is a paper by Impagliazzo and Wigderson in STOC 97 proving the
following: If there exists a problem solvable in exponential time that
requires exponential size Boolean circuits, then BPP = P.
Finally, Harvey asks
"There are issues connected with "why should I believe a computer?" at the
hardware and software level, and how good "independent verification" is. Do
you regard such issues as nonissues?"
The answer is that I certainly agree that there are such issues, and some
of them have been discussed on fom, by Shipman and others. There are
also issues (pointed out on fom) about believing lengthy proofs not involving computers at all.
Incidentally, I think that there is a serious criticism of the Appel/Haken proof
that does not concern its use of computers. Appel and Haken address
this issue in the Mathematical Intelligencer (8,1), 1986. They ask
"How does one go about checking a 400-page unavoidability proof
for a set of 1478 configurations?" (This part was done without the aid
of computers.) They say that Ulrich Schmidt checked about 40 per cent
of the total unavoidability proof, and found "fourteen errors of degree 1
and one of degree 3". All errors were easily corrected, but apparently
no one has independently checked the whole proof, and one wonders if there
are errors which can't be corrected.
It is my understanding that the paper by Seymour et al (for which I don't
have a reference at the moment) addresses this issue by coming up with
a proof in which most of the drudge work can be (and is) checked by
computer. They argue that the increased reliance on computers makes
their proof more convincing. That seems right to me.