[FOM] Formal verification
Dominic Mulligan
dominic.p.mulligan at googlemail.com
Wed Oct 22 05:24:33 EDT 2014
> It always seemed to me naively that it ought to be possible to overcome
> this obstacle, provided that the right language is around. If one is
> willing to work with "conditional" verification - which would only require
> the formalization of the theorems to be used - then presumably it would be
> possible to formalize results, and fill in the missing links later?
This is certainly possible. However, a big problem with this approach is
that you would a) still need to invest a substantial amount of time and
effort to do this, and b) run the risk of not getting anything publishable
out of it, as reviewers are wary of formal proofs with gaping holes or
axioms in them (no matter how seemingly benign), at least in my experience.
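For concreteness, here is roughly what the "conditional" approach looks
like in Coq (the lemma and theorem below are made up purely for
illustration): the missing result is recorded as an Axiom, or as a Lemma
closed with Admitted, the main theorem is checked against it, and Print
Assumptions then lists exactly which gaps remain to be filled in later.

  (* A result we have not formalised yet: record its statement as an
     axiom (or state it as a Lemma and close it with "Admitted").    *)
  Axiom missing_lemma : forall m n : nat, m + n = n + m.

  (* A "conditionally" verified theorem: fully checked by Coq, except
     that it depends on missing_lemma.                                *)
  Theorem main_result : forall a b c : nat, (a + b) + c = (b + a) + c.
  Proof.
    intros a b c.
    rewrite (missing_lemma a b).
    reflexivity.
  Qed.

  (* Lists the axioms that main_result depends on, i.e. the gaps. *)
  Print Assumptions main_result.
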
> This to me seems like a much more serious issue. Incompatible languages
> would seem to not only cause a duplication of efforts, but also may cause
> people (like myself) to hesitate about investing the time required to use
> one of these. Is there a serious effort towards achieving a translation
> between the different systems, or at least building a system in which the
> current systems can both be interpreted?
I think more needs to be said here. There are three classes of proof
assistant in widespread use among computer scientists today:
1. Those based on dependent type theory (Agda, Coq, Matita, NuPrl, PVS,
etc.)
2. Those based on HOL (HOL4, HOL Light, Isabelle/HOL, etc.)
3. Sui generis systems like ACL2.
Within the second class there's a lot of conformity, as they're essentially
all implementations of the same logic (Gordon's HOL) with minor
differences. As a result, there has been some effort to export definitions
and proofs between the systems, and also to use automated proof tools
implemented in one system for closing proof obligations in another system,
for example with Hurd and Kumar's OpenTheory tool. These systems will also
be more familiar to mainstream mathematicians: equality is extensional, the
axiom of choice is no problem, excluded middle is no problem, the type
system is relatively unobtrusive, and so on.
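(To give a flavour of what "no problem" means here: the principles below
are built into the HOL logic itself, whereas in a type theory like Coq they
are optional axioms that have to be imported explicitly. I use Coq syntax
here, and in the snippets below, purely as convenient notation.)

  Require Import Classical FunctionalExtensionality ClassicalChoice.

  Check classic.
  (* forall P : Prop, P \/ ~ P                     -- excluded middle *)
  Check functional_extensionality.
  (* forall (A B : Type) (f g : A -> B),
       (forall x, f x = g x) -> f = g              -- extensionality  *)
  Check choice.
  (* forall (A B : Type) (R : A -> B -> Prop),
       (forall x, exists y, R x y) ->
       exists f, forall x, R x (f x)               -- choice          *)
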
The third class are out on a limb, as they're radically different to the
rest. ACL2 is essentially a pure subset of the Lisp programming language
with a host of decision procedures and heuristics implemented for
(semi-automatically) proving properties of Lisp programs. The logic is
untyped, and there are no higher-order functions, which makes it very
different to both the HOLs and the type theory based systems, which use
typed higher-order logics. However, there have been attempts to translate
back and forth between HOL-based systems and ACL2 in the past, for example
with the ACL2PII tool, but I'm not aware of how current these are or
whether they have bitrotted (though my impression from Mike Gordon is that
they're not in current use, and that new efforts are underway to link HOL4
and ACL2).
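(For instance, a statement like the following, which quantifies over an
arbitrary function, is routine in the typed higher-order systems but cannot
be written down directly as a single formula in a first-order logic like
ACL2's, which has to approach such things through its own mechanisms. The
lemma name below is made up; map_length is from Coq's standard library.)

  Require Import List.

  (* A property quantifying over an arbitrary function f : nat -> nat. *)
  Lemma map_preserves_length :
    forall (f : nat -> nat) (l : list nat),
      length (map f l) = length l.
  Proof.
    intros f l. apply map_length.
  Qed.
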
The first class have some substantial differences between them:
- whether they are predicative (Agda) or impredicative (Coq, Matita);
- whether they are extensional (NuPrl) or intensional (the rest);
- whether they have explicit universe indices (Matita), typical ambiguity
for universes (Coq), or explicit universe polymorphism (Agda);
- whether proof terms are constructed by hand (Agda) or built with a tactic
language (the rest of them);
- whether they have a complex Standard ML-style module system (Coq), a
simple module system (Agda), or type-directed name overloading resolution
(Matita);
- and other differences in the universe hierarchies (e.g. Matita has an
additional universe hierarchy that Coq does not), and so on.
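To illustrate just one of these differences: here is the same trivial fact
proved twice in Coq (the names are made up for illustration), once as an
explicit, hand-written proof term, roughly the style an Agda user writes,
and once with the proof term constructed by a tactic script.

  (* As an explicit, hand-written proof term: *)
  Definition and_comm_term : forall P Q : Prop, P /\ Q -> Q /\ P :=
    fun (P Q : Prop) (H : P /\ Q) =>
      match H with conj p q => @conj Q P q p end.

  (* The same fact, with the proof term built by a tactic script: *)
  Lemma and_comm_tactic : forall P Q : Prop, P /\ Q -> Q /\ P.
  Proof.
    intros P Q [HP HQ]. split.
    - exact HQ.
    - exact HP.
  Qed.
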
As a result, it's hard to translate from the HOL systems into the type
theory based systems: there's a lot of variation within the latter class,
and the HOL-based systems are classical whereas the type theory based
systems are constructive by default. It's also hard to translate between
the type theory based systems themselves. However, Matita and Coq were, at
one point at least, similar enough to share proof terms, making some
communication of results and definitions between the two possible, and
Wiedijk has done some work on embedding the HOL Light logic into Coq,
making a translation between the two feasible in theory (though to what
extent it is practical is debatable, as his translation caused a severe
size increase, as I understand it --- perhaps he can clarify).
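(A toy instance of the classical/constructive mismatch: the lemma below is
a theorem of the HOL logics essentially for free, but its Coq proof has to
invoke the classical axiom explicitly, so even a statement this small does
not translate proof-for-proof. The lemma name is made up; Coq's standard
library in fact already proves this as NNPP.)

  Require Import Classical.

  (* Provable outright in any HOL system; in Coq the proof needs the
     classical axiom "classic" imported above.                       *)
  Lemma not_not_elim : forall P : Prop, ~ ~ P -> P.
  Proof.
    intros P H.
    destruct (classic P) as [HP | HnP].
    - exact HP.
    - exfalso. apply H. exact HnP.
  Qed.
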
Obviously, lots of people in the field see this state of affairs as a major
problem! Locally, here at Cambridge we've developed metalanguages (Lem, L2,
Sail) for writing our hardware/software models in, which can then be
translated into multiple proof assistants. Much more work needs to be done
on this problem, however, and for the foreseeable future translating
mathematics and formal models of software or hardware between the systems
is going to remain a major pain.
Thanks,
Dominic
On 21 October 2014 22:31, Rempe-Gillen, Lasse <L.Rempe at liverpool.ac.uk>
wrote:
> Thank you all for the many interesting responses to my message. I would
> like to pick up on a few points.
>
> ---
> Dominic Mulligan wrote:
>
> > A lack of libraries is a major issue. It's hard to start formalising
> > research level mathematics when large swathes of undergraduate
> > mathematics are still missing from the libraries of the main proof
> > assistants. Further, what exists is coloured by the biases of those who
> > use the systems, mainly computer scientists: there are a lot of
> > formalisations of things like red black trees and typed lambda calculi,
> > for instance, whereas continuous mathematics seems to be neglected, or
> > at least it seems to me.
>
> It always seemed to me naively that it ought to be possible to overcome
> this obstacle, provided that the right language is around. If one is
> willing to work with "conditional" verification - which would only require
> the formalization of the theorems to be used - then presumably it would be
> possible to formalize results, and fill in the missing links later?
>
> I for one would also be quite happy to make every beginning postgraduate
> student of mine - and perhaps also project undergraduate students -
> formalize some part of existing mathematics, assuming that there is a
> system that is easy enough to learn and use to make this feasible. In my
> view there is a real benefit to really having to understand a proof on this
> detailed level.
>
> This makes me think that, if there is a suitable system and an agreed
> standard, the database of 'standard' mathematics that has been formalized
> might start to grow rather quickly. Am I being naive?
>
> > Further, libraries implemented in a system based on type theory like
> > Coq or Agda are useless if you use Isabelle or HOL4, systems based on
> > HOL. Even the statements of theorems and techniques used to formalise
> > e.g. matrices and linear algebra would need considerable work to port
> > from Coq or Agda to Isabelle due to the vastly different foundations of
> > the two groups of systems.
>
> This to me seems like a much more serious issue. Incompatible languages
> would seem to not only cause a duplication of efforts, but also may cause
> people (like myself) to hesitate about investing the time required to use
> one of these. Is there a serious effort towards achieving a translation
> between the different systems, or at least building a system in which the
> current systems can both be interpreted?
>
> ---
> Sam Sanders critiqued my contention "that eventually all rigorous
> mathematical results should routinely come with a formal verification". (Of
> course, it is dangerous to make such absolute statements...) He also feels
> that advances in Artificial Intelligence would be necessary for my aim. To
> avoid unproductive discussions, let me try to isolate a key point.
>
> Let me use "mathematically rigorous" for a proof that is rigorous in the
> sense it is often taught in undergraduate mathematics, e.g. at the level of
> detail of an epsilon-delta proof, a completely worked-out proof by
> induction, etc.
>
> (Most published mathematical proofs are not written in this style. I
> believe - again perhaps naively - that, to claim a proof, I should at least
> be confident that I would be able to produce such a mathematically rigorous
> proof with a limited amount of extra effort, and without requiring new
> ideas. However, this is a somewhat separate issue from formal verification.)
>
> My feeling has been that it ought to be feasible to pass from a
> mathematically rigorous proof to a computer-verifiable one with a modest
> time commitment (i.e., comparable to that of achieving the mathematically
> rigorous proof in the first place). It would surprise me if this really
> required substantial advances in AI, rather than good language and system
> design in collaboration with working mathematicians. Is there something I
> am missing?
>
> ---
> Arnold Neumaier gives a very useful estimate on this final point:
>
> > Making a mathematically fully acceptable proof written in LaTeX
> > formally rigorous (i.e., acceptable to a proof assistant) has an
> > overhead factor of roughly 40.
>
> He also notes:
> > The main obstacle is the lack of mathematician-friendliness of all
> > available software systems, which repels most working mathematicians.
>
> Clearly a factor of 40 is prohibitive. Is it realistic to reduce this
> drastically (let us say, to a factor of 2 or 3) using current knowledge?
> ---
>
> Freek Wiedijk also comments on the differences between type theory and HOL
> systems, and the difficulty in overcoming them (see above).
>
> Regarding engagement from 'mainstream mathematicians', he says:
>
> > It would help if it became clear whether these people consider it
> > important to have full classical ZFC available or not (i.e., whether
> > they don't mind working in the formal world of Bishop's mathematics,
> > where you cannot prove the intermediate value theorem if you state it
> > the naive way, say.)
>
> To this I would answer that it would seem unlikely that many
> mathematicians (myself included) would want to work in a system where we
> cannot use the usual version of the intermediate value theorem ... :)
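(For reference, the "naive way" of stating it, and the approximate version
that is the usual constructive substitute, might be written down as follows
--- statements only, using Coq's real-number library purely as notation,
and ignoring that Coq's own real-number library is itself classical. The
definition names are made up.)

  Require Import Reals.
  Open Scope R_scope.

  (* The naive (classical) intermediate value theorem. *)
  Definition IVT_naive : Prop :=
    forall (f : R -> R) (a b : R),
      continuity f -> a <= b -> f a <= 0 -> 0 <= f b ->
      exists c : R, a <= c <= b /\ f c = 0.

  (* The constructive substitute: zeroes are only located up to eps. *)
  Definition IVT_approx : Prop :=
    forall (f : R -> R) (a b : R),
      continuity f -> a <= b -> f a <= 0 -> 0 <= f b ->
      forall eps : R, eps > 0 ->
        exists c : R, a <= c <= b /\ Rabs (f c) < eps.
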
>
> > Currently doing formal proof goes too slowly to be really pleasant.
> > The hammers -- like Isabelle's Sledgehammer -- help, but maybe not
> > enough yet. You still need to be aware of a lot of details that you
> > normally would leave implicit.
> > At least, I think.
>
> Perhaps this is really the key point, and it is hard for me to understand
> exactly what the overhead is, without getting my hands dirty and actually
> doing it. Here is a very simple statement which I often give to students as
> a first exercise in iteration, and to practice formal mathematics.
>
> ***
> Let f be a real-valued function on the real line, such that f(x)>x for all
> x.
> Let x_0 be a real number, and define the sequence (x_n) recursively by
> x_{n+1} := f(x_n). Then x_n diverges to infinity.
> ***
>
> A standard proof might go along the following steps: 1) By assumption, the
> sequence is strictly increasing; 2) hence the sequence either diverges to
> infinity or has a finite limit; 3) by continuity, any finite limit would
> have to be a fixed point of f, hence the latter cannot occur.
>
> What effort would be required to formalize this kind of statement using
> current technology?
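To give some idea of the first step, the statement itself might be written
in Coq roughly as follows (a sketch only, with made-up names; note that
continuity of f has to be added as an explicit hypothesis, since the proof
sketched above appeals to it and the result fails without it). Writing the
statement down is quick; the overhead discussed in this thread lies almost
entirely in the proof script that has to follow it.

  Require Import Reals.
  Open Scope R_scope.

  (* The iterates x_0, x_1 = f(x_0), x_2 = f(x_1), ... of f from x0. *)
  Fixpoint iterate (f : R -> R) (x0 : R) (n : nat) : R :=
    match n with
    | O => x0
    | S m => f (iterate f x0 m)
    end.

  (* The statement of the exercise, with continuity made explicit. *)
  Definition exercise_statement : Prop :=
    forall (f : R -> R) (x0 : R),
      continuity f ->
      (forall x : R, f x > x) ->
      forall M : R, exists N : nat,
        forall n : nat, (N <= n)%nat -> iterate f x0 n > M.
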
>
> Best wishes,
> Lasse