[FOM] Devlin on incorrect proofs

Neil Tennant neilt at mercutio.cohums.ohio-state.edu
Mon Jun 2 09:52:21 EDT 2003


On Sun, 1 Jun 2003 JoeShipman at aol.com wrote:

> ... I find Devlin's insouciance to the phenomenon of incorrect published
> proofs (his essay is ironically titled "The shame of it") quite
> striking.

Joe,

Thanks for the URL. Devlin ends by saying

"Is there anything for mathematicians to be ashamed of, as I jokingly
began? Only if it is shameful to push the limits of human mental ability,
producing arguments that are so intricate that it can take the world's
best experts weeks or months to decide if they are correct or not. As they
say in Silicon Valley, where I live, if you haven't failed recently,
you're not trying hard enough. No, I am not ashamed. I'm proud to be part
of a profession that does not hesitate to tackle the hardest challenges,
and does not hesitate to applaud those brave individuals who strive to
reach the highest peaks, who stumble just short of the summit, but perhaps
discover an entire new mountain range in the process."

Isn't Devlin just underscoring human proneness to error in *everything* we
undertake, whether individually or collectively---even in mathematics?
After all, finding proofs is just another human activity, and is itself
(like the subsequent process of checking a purported proof) prone to
error in execution.

We might be able to raise our confidence level in a checked proof if it
were written in a precise enough symbolism to be checked by a computer
program. But then wouldn't some small margin of potential error still
remain in the supposed correctness of the program itself?---either in the
specification of the algorithm, or in its implementation, or in the
physical functioning of the computer? I would like to know from members of
this list who are in a position to know whether there are any theoretical
results about confidence levels in automated proof-checking that cannot be
exceeded (perhaps as a function of the length of the input, i.e., the
length of the proof to be checked for correctness).
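Purely by way of illustration (this toy example is mine, not anything from Devlin's essay or any real proof assistant), even a proof checker that fits in a few lines makes the point: mechanical checking can be made very simple, but trust then shifts to the correctness of the checker's own code, its interpreter, and the hardware it runs on.

```python
# Toy checker for Hilbert-style derivations using only modus ponens.
# Formulas are strings or tuples; ("->", p, q) encodes "p implies q".
# This is an illustrative sketch, not any established verifier.

def check_proof(premises, lines):
    """Return True iff every line is a premise or follows from two
    earlier lines by modus ponens (from A and A -> B, infer B)."""
    derived = []
    for formula in lines:
        ok = formula in premises
        if not ok:
            # Search earlier lines for some A with A -> formula also derived.
            for a in derived:
                if ("->", a, formula) in derived:
                    ok = True
                    break
        if not ok:
            return False
        derived.append(formula)
    return True

# Example: from P and P -> Q, the three-line proof of Q checks out,
# while asserting Q with no justification does not.
premises = ["P", ("->", "P", "Q")]
print(check_proof(premises, ["P", ("->", "P", "Q"), "Q"]))  # True
print(check_proof(premises, ["Q"]))                          # False
```

Of course, the confidence question raised above applies recursively: one would still have to trust that this checker's specification, implementation, and execution are themselves correct.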

The main human factor about which Devlin says enough for the reader to
identify it, but on which (surprisingly) he does not dwell at all, is the
fact that the Clay
Institute has offered a million dollars for a solution to Poincare's
Conjecture. Is it any wonder if a mathematician "morally quite certain" of
his/her proof jumps the gun and claims to have solved the problem? This
would, after all, probably work as a huge disincentive to others racing
for the same prize. Such a human strategy still makes sense even under the
conditions of the prize (quoting from Devlin's piece):

"the Clay Ins[t]itute will not award the $1 million prize for a solution
to any of the Millennium Problems until at least one year has elapsed
after the solution has (i) been submitted to a mathematics journal, (ii)
survived the refereeing process for publication, and (iii) actually
appeared in print for the whole world to scrutinize."

Overlooking the fact that mathematicians, like human beings generally, can
be venal is to make a mistake (with regard to ensuring that a particular
intellectual goal is honestly achieved) rather like the mistake of the
designers of the safety-monitoring devices in the control-room at Three
Mile Island. There, the goal was to ensure that a certain level of
awareness of potentially dangerous operating conditions would be achieved
by the human beings monitoring the computer-screens displaying 
readouts from various measuring devices. What the designers overlooked was
the basic human need---even among computer geeks---to chat with one's
neighbors.

One wonders whether anyone at the Clay Institute, in offering the
million-dollar prizes, knew of Jacobi's words: "the glory of the human
spirit is the sole aim of all science".

Neil Tennant


