[FOM] Formally verifying "checkers is a draw"
Timothy Y. Chow
tchow at alum.mit.edu
Sun Aug 3 21:54:22 EDT 2014
Thanks to all who responded. I learned several interesting things from
this discussion.
1. By email, someone told me that Flyspeck is in the home stretch and is
estimated to take about 10,000 CPU hours.
https://www.youtube.com/watch?v=DJx8bFQbHsA&index=3&list=PL9kd4mpdvWcCUOrG5S7KrvOFDhyqnXoUj
(at approx 33 min).
Someone else emailed me with a pointer to another highly computational
proof, related to the Erdős discrepancy problem.
http://arxiv.org/pdf/1402.2184v1.pdf
Near the end of the paper, the authors suggest that the proof "is probably
one of the longest proofs of a non-trivial mathematical result ever
produced." A quick glance suggests that it is somewhat smaller than
"checkers is a draw," but perhaps the authors would not consider the
latter a "non-trivial mathematical result."
2. The fact that checkers has a PSPACE flavor is probably not directly
relevant, but it *does* seem relevant that there are interesting
mathematical theorems whose proofs do not fall into the "NP paradigm."
By the "NP paradigm," I mean the idea that the proof of an interesting
mathematical theorem might take a lot of effort to discover, but that
verifying the proof (once discovered) should take exponentially less
effort. A statement like "checkers is a draw" or "such-and-such a
combinatorial structure does *not* exist" seems to have the property
that the effort involved in verifying a proof is comparable to the
effort required to discover it. Storing a (partial or full) certificate
may reduce the effort somewhat, but verification will still be
laborious, because an exponentially succinct certificate may simply not
exist.
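To make the asymmetry concrete, here is a minimal Python sketch of my
own (it illustrates the general point only and is not drawn from the
checkers proof; the formula and the function names are invented for
illustration). An NP-style claim such as "this formula is satisfiable"
comes with a succinct certificate, namely a satisfying assignment, that
can be checked in one pass over the formula, while the co-NP-style
claim "this formula is unsatisfiable" has in general no comparably
succinct certificate, so a naive verifier must redo the exponential
search.

from itertools import product

# A CNF formula as a list of clauses; the literal 2 means x2 and -2
# means NOT x2.  This particular formula is unsatisfiable.
FORMULA = [[1, 2], [-1, 2], [1, -2], [-1, -2]]

def satisfies(assignment, formula):
    # Cheap: a single pass over the clauses.
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in formula)

def verify_sat_certificate(assignment, formula):
    # NP paradigm: checking the supplied witness is fast.
    return satisfies(assignment, formula)

def verify_unsat(formula, num_vars):
    # co-NP flavor (naive): with no short witness to check, the
    # verifier repeats the whole 2^n search that "discovered" the fact.
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if satisfies(assignment, formula):
            return False
    return True

print(verify_sat_certificate({1: True, 2: True}, [[1, 2]]))  # True, fast
print(verify_unsat(FORMULA, 2))  # True, but only by exhaustive search

Of course, a particular unsatisfiable formula may happen to admit a
short refutation that serves as a certificate; the point is that
nothing guarantees one exists, which is exactly the situation with
storing partial certificates above.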
What this fact suggests to me is the following. We can expect the
database of formally verified mathematics to keep growing in the near
future. If "co-NP-hard" results proliferate, then there will be strong
pressure to structure the database in such a way that the results are
*not* re-verified every time the database is loaded, because that would
just be too computationally expensive. No matter how this is done, it
seems to me that the door is opened to new errors creeping in. For
example, a theorem might be marked as proved, and then later
accidentally altered. If it is not re-checked, the error could sit
undetected for a long time. Even if the core of the proof assistant is
perfectly correct, and executes perfectly every time it is run, the
database could still accumulate errors in this manner.
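One obvious partial safeguard, sketched below in Python purely as my
own illustration (no actual proof assistant's storage format is being
described, and all the names are invented), is to store alongside each
"verified" flag a cryptographic hash of exactly what was checked. A
stale flag on an altered statement then fails a cheap integrity test at
load time, even though the expensive re-verification is skipped.

import hashlib
from dataclasses import dataclass

@dataclass
class TheoremRecord:
    statement: str     # the formal statement, as stored
    verified: bool     # flag set when the proof was last fully checked
    checked_hash: str  # hash of the statement at checking time

def digest(statement):
    return hashlib.sha256(statement.encode()).hexdigest()

def record_verification(statement):
    # The expensive verification happens once (elided here); we then
    # remember exactly what it was that got checked.
    return TheoremRecord(statement, True, digest(statement))

def still_trustworthy(rec):
    # Cheap check on every load: the flag alone proves nothing; the
    # stored statement must still hash to what was actually verified.
    return rec.verified and digest(rec.statement) == rec.checked_hash

rec = record_verification("checkers_is_a_draw")
print(still_trustworthy(rec))        # True

rec.statement = "checkers_is_a_win"  # later accidental alteration
print(rec.verified)                  # still True: the stale flag
print(still_trustworthy(rec))        # False: the hash check catches it

This catches accidental corruption of the kind described above, though
not an error that was already present when the flag was set, and it
assumes the hash covers everything the flag's validity depends on (the
statement's dependencies, the logic's axioms, and so on).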
Perhaps all this is obvious to the experts, but as an interested
bystander, I have gotten the impression that formally verified mathematics
is sometimes touted as satisfactorily solving the problem of accumulated
errors in the corpus of known mathematics. There is no doubt that the
project of formally verifying mathematics, if successfully carried out,
will dramatically reduce the occurrence of erroneous proofs. However,
if my observations above are correct and co-NP-hard results
proliferate, then the problem of errors creeping in undetected, because
nobody wants to expend the effort to re-check old results, will not
entirely go away.
Tim