[FOM] 614: Adventures in Formalization 1
Dominic Mulligan
dominic.p.mulligan at googlemail.com
Tue Sep 15 05:30:43 EDT 2015
Dear Harvey,
Some comments and pointers to work you may find interesting:
> Now for the negative. The Theorem Proving infrastructure is, as far as
> I can tell, not close to ABSOLUTE INFALLIBILITY. Or at least not
> nearly as close to it as we can make it. And making it much closer is
> a missed opportunity that is going to be very interesting,
> philosophically, foundationally, practically, and theoretically, and
> even emotionally.
There is ongoing work on improving the infallibility of proof
assistants on several fronts. The most prominent is perhaps the work
of Kumar et al. on formalising the semantics and soundness of an
implementation of HOL in CakeML, a dialect of the ML programming
language that has itself had its semantics and a compiler
implementation verified [1]. Previous work along these lines was
carried out by Harrison in [2]. Anand and Rahli have ongoing work
formalising the metatheory of Computational Type Theory, the type
theory implemented in the NuPRL system [3]. Work on proving the
correctness of parsers (in order to avoid a so-called `Pollack
inconsistency', where a printed statement re-parses as something
other than what was actually proved) is also progressing. Further,
inordinate amounts of
effort in computer science, past and ongoing, have been dedicated to
the wider hardware and software verification effort, ensuring the
underlying hardware and systems software that theorem provers sit atop
behave correctly. Your later question about cosmic rays has also
been considered [4]!
> Take a look at the statements of theorems in
> http://www.cs.ru.nl/~freek/100/ They are all unconscionably ugly to
> read and digest for a typical person outside the Theorem Proving
> community. In the Theorem Proving community, they are probably looked
> at as breathtaking works of art.
>
> Now this is something that is a missed opportunity and CAN DEFINITELY
> BE LARGELY FIXED. Is it worth fixing? ABSOLUTELY, but probably no one
> in the Theorem Proving community thinks it is worth their time to fix
> it, and should have a low priority. Conventional wisdom is often very
> wrong.
I think this is overstating things. Looking at the field, there's
plenty of ongoing work in bringing fully formal mathematics closer to
informal mathematics, to the extent possible. Certainly, I don't
think anybody regards a typical tactic-driven proof or a typical
statement in e.g. Coq or Isabelle as a `breathtaking work of art'.
Indeed, there's a dedicated workshop on user interfaces for theorem
provers [5] where issues such as these, and potential solutions to
them, are discussed and presented. Some notes:
1. At least for the Isabelle statements on Freek's webpage, many of
them are written in the `old style', with explicit use of meta-level
quantifiers and connectives, rather than the more familiar and clearer
`fixes-assumes-shows' style favoured today (a small example
contrasting the two styles follows these notes). Further, recent
releases of Isabelle can present inferred types and other information
as tooltips that pop up inside the Isabelle IDE, giving the user much
more information than plain HTML is able to put across.
2. Within the recent past, Isabelle *proofs* have been transformed
from an inscrutable series of invocations of low-level inferences and
applications of decision procedures into a more structured, high-level
style based on the Isar language (see the second example following
these notes). It is now much easier for somebody who is not an expert
in theorem proving technology to follow a proof (though matters are
still not perfect) and consequently to convince themselves that the
statement and proof are correct.
3. The Matita proof assistant experimented with type-directed
disambiguation of names to improve the clarity of statements and to
reduce the amount of user-driven disambiguation, the need for separate
namespaces, and other clutter. See [6] for details.
4. Several systems have experimented with ad hoc or regimented
systems of coercions to reduce the amount of `clutter' necessary when
converting a value of one type to a value of another. Matita, for
example, has a fairly advanced coercion mechanism [7]; Coq and
Isabelle have similar facilities (a tiny sketch of the idea follows
these notes).
5. Kamareddine et al. have recently investigated (and may still be
investigating) the use of natural language as the primary input method
to proof assistants [8]. Proofs in their prototype could be written in
a (fairly regimented) subset of English and automatically translated
into Isabelle. The HELM project [9] investigated the other direction,
moving from a proof term of the Calculus of Inductive Constructions to
a natural-language summary of the proof it represented.
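
To make note 1 above a little more concrete, here is a toy Isabelle
snippet (the statements and lemma names are my own, chosen purely for
brevity) contrasting the `old style', written directly with the
meta-level quantifier and connectives, with the same statement in the
fixes-assumes-shows style:

    theory Statement_Styles
      imports Main
    begin

    (* `Old style': the meta-level quantifier ⋀ and implication ⟹
       appear explicitly in the statement itself. *)
    lemma subset_trans_old:
      "⋀A B C. A ⊆ B ⟹ B ⊆ C ⟹ A ⊆ C"
      by blast

    (* The structured style favoured today: variables and hypotheses
       are named and stated separately from the conclusion. *)
    lemma subset_trans_new:
      fixes A B C :: "'a set"
      assumes "A ⊆ B" and "B ⊆ C"
      shows "A ⊆ C"
      using assms by blast

    end

The content of the two lemmas is identical; the second form is simply
much closer to how the statement would be phrased in an informal text.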
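
For note 2, the following toy theory (again, a deliberately trivial
fact, chosen only for illustration) shows the same lemma proved first
as an unstructured apply script and then as a structured Isar proof
that can be read without replaying it inside the system:

    theory Proof_Styles
      imports Main
    begin

    (* An unstructured tactic script: each step manipulates an
       internal goal state that the reader cannot see on the page. *)
    lemma inter_subset: "A ∩ B ⊆ A"
      apply (rule subsetI)
      apply (erule IntE)
      apply assumption
      done

    (* The same fact in Isar: the intermediate statements are written
       out, so the argument is legible on its own. *)
    lemma inter_subset': "A ∩ B ⊆ A"
    proof (rule subsetI)
      fix x
      assume "x ∈ A ∩ B"
      then show "x ∈ A" by (rule IntD1)
    qed

    end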
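
Finally, for note 4, a minimal sketch of the flavour of a coercion
mechanism, using Isabelle's standard setup (importing Complex_Main,
which brings in coercions from the naturals into the reals); Matita's
mechanism [7] is considerably more general, but the point is only
that the mixed statement below is accepted without the conversion
being written explicitly:

    theory Coercion_Demo
      imports Complex_Main
    begin

    (* n :: nat is used where a real number is expected; the coercion
       is inserted automatically, so no explicit conversion clutters
       the statement. *)
    lemma "(n :: nat) < (x :: real) ⟹ n ≤ x + 1"
      by linarith

    end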
Certainly, there's a lot of active work in this area, and it's wrong
to suggest that anybody is of the view that the current state of the
art cannot be improved massively.
[1]: https://cakeml.org/itp14.pdf
[2]: http://link.springer.com/chapter/10.1007%2F11814771_17
[3]: http://www.nuprl.org/documents/Anand/TowardsAFormallyVerifiedProofAssistant.pdf
[4]: https://www.cs.princeton.edu/~dpw/papers/lambdazap-icfp06.pdf
[5]: http://www.informatik.uni-bremen.de/uitp/
[6]: https://upsilon.cc/~zack/research/publications/matita.pdf
[7]: http://matita.cs.unibo.it/PAPERS/nonunifcoerc.pdf
[8]: http://www.macs.hw.ac.uk/~fairouz/forest/papers/conference-publications/mkm07-linz-tsa.pdf
[9]: http://www.cs.unibo.it/~asperti/PAPERS/lncs_2152.pdf
Dominic