[FOM] Formal verification
croux at andrew.cmu.edu
Wed Oct 22 10:26:52 EDT 2014
> It always seemed to me naively that it ought to be possible to overcome
> this obstacle, provided that the right language is around. If one is
> willing to work with "conditional" verification - which would only require
> the formalization of the theorems to be used - then presumably it would be
> possible to formalize results, and fill in the missing links later?
>
> I for one would also be quite happy to make every beginning postgraduate
> student of mine - and perhaps also project undergraduate students -
> formalize some part of existing mathematics, assuming that there is a
> system that is easy enough to learn and use to make this feasible. In my
> view there is a real benefit to really having to understand a proof on
> this detailed level.
>
> This makes me think that, if there is a suitable system and an agreed
> standard, the database of 'standard' mathematics that has been formalized
> might start to grow rather quickly. Am I being naive?
>
A bit, I'm afraid. There has been non-trivial effort in this direction,
and I'm hopeful that great progress will be made in the next few years,
but it takes significant time and engineering effort to get something
usable. Undergraduates have neither the time nor the expertise to develop
this kind of large-scale project. The most ambitious, in my opinion, is the
Mathematical Components formalization of undergraduate (and parts of
graduate) algebra:
http://ssr.msr-inria.inria.fr/
I think you'll find that even reading these files is non-trivial for a
non-specialist. Another notable example is C-CoRN:
https://github.com/c-corn/corn
Though I think my bias towards Coq is showing.
>> Further, libraries implemented in a system based on type theory like
>> Coq or Agda are useless if you use Isabelle or HOL4, systems based on
>> HOL. Even the statements of theorems and techniques used to formalise
>> e.g. matrices and linear algebra would need considerable work to port
>> from Coq or Agda to Isabelle due to the vastly different foundations of
>> the two groups of systems.
>
> This to me seems like a much more serious issue. Incompatible languages
> would seem to not only cause a duplication of efforts, but also may cause
> people (like myself) to hesitate about investing the time required to use
> one of these. Is there a serious effort towards achieving a translation
> between the different systems, or at least building a system in which the
> current systems can both be interpreted?
>
Some effort has been made in this direction, but it's not such a big issue
*in theory*. HOL is pretty much a strict subset of Coq + EM and so is Agda
(sort of). However *in practice* translating theorems from one system to
another involves stupendous engineering effort. Some of it has been
accomplished:
http://prosecco.gforge.inria.fr/personal/ckeller/Recherche/hollightcoq.html
This "possible in theory, hard in practice" issue is omnipresent in
software. It's as much a money problem as anything else.
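To give a flavor of what "Coq + EM" means in practice: excluded middle is not
built into Coq, but you can assume it (the standard Classical module does just
that), after which the usual classical moves go through. A toy example, purely
as illustration:

    Require Import Classical.

    (* classic : forall P : Prop, P \/ ~ P, provided by Classical *)
    Lemma remove_double_negation : forall P : Prop, ~ ~ P -> P.
    Proof.
      intros P Hnn.
      destruct (classic P) as [Hp | Hnp].
      - exact Hp.
      - contradiction.
    Qed.

(The library already proves this as NNPP; the point is just that HOL-style
classical reasoning becomes available once EM is assumed.)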
> ---
>
> Let me use "mathematically rigorous" for a proof that is rigorous in the
> sense it is often taught in undergraduate mathematics, e.g. at the level
> of detail of an epsilon-delta proof, a completely worked-out proof by
> induction, etc.
>
>
> My feeling has been that it ought to be feasible to pass from a
> mathematically rigorous proof to a computer-verifiable one with a modest
> time commitment (i.e., comparable to that of achieving the mathematically
> rigorous proof in the first place). It would surprise me if this really
> required substantial advances in AI, rather than good language and system
> design in collaboration with working mathematicians. Is there something I
> am missing?
>
The issue is that there is another order of magnitude of rigor between
"rigorous" and "formal". I'll try to detail this point with the example you
give at the end. To fill that gap, you need software that is capable of
"filling in" the missing pieces, a surprisingly non-trivial task. Honestly,
I'm not sure what AI means in this context; it's used as a stand-in for "an
algorithm that can do the work of a human".
> Clearly a factor of 40 is prohibitive. Is it realistic to reduce this
> drastically (let us say, to a factor of 2 or 3) using current knowledge?
> ---
>
A factor of 2-3 is feasible for a trained expert, as far as I understand
from people working on large projects like the Feit-Thompson formalization
and the Flyspeck project. It is not feasible for untrained mathematicians.
>
> Perhaps this is really the key point, and it is hard for me to understand
> exactly what the overhead is, without getting my hands dirty and actually
> doing it. Here is a very simple statement which I often give to students
> as a first exercise in iteration, and to practice formal mathematics.
>
Disclaimer: I haven't actually formalized this statement, and I'm just
using previous knowledge to anticipate the difficulties you would likely
encounter along the way.
> ***
> Let f be a real-valued function on the real line, such that f(x)>x for all
> x.
> Let x_0 be a real number, and define the sequence (x_n) recursively by
> x_{n+1} := f(x_n). Then x_n diverges to infinity.
> ***
>
> A standard proof might go along the following steps:
> 1) By assumption, the
> sequence is strictly increasing;
Actually, that is not immediate by assumption. You need to take a given n,
apply the universal statement forall x, f(x) > x at x_n to get f(x_n) > x_n,
i.e. x_{n+1} > x_n, and then probably use something along the lines of:
y > x => x < y. I'm quibbling, but this is the kind of detail you need to go
into in a proof assistant.
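To make this concrete, here is roughly what this single step might look like
in Coq. I haven't run it, and the lemma names are from memory, so take it as
a sketch rather than a finished development:

    Require Import Reals.
    Open Scope R_scope.

    Section StepOne.
      Variable f : R -> R.
      Hypothesis Hf : forall y, f y > y.
      Variable x : nat -> R.
      Hypothesis Hx : forall n, x (S n) = f (x n).

      (* step 1: the sequence is strictly increasing *)
      Lemma x_increasing : forall n, x n < x (S n).
      Proof.
        intro n.
        rewrite Hx.     (* unfold the recursive definition *)
        apply Rgt_lt.   (* flip > into < *)
        apply Hf.       (* instantiate the hypothesis at x n *)
      Qed.
    End StepOne.

Three tactic lines for a step the paper proof dismisses with "by assumption".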
> 2) hence the sequence either diverges to
> infinity or has a finite limit;
This is a theorem, and needs to be proven. Note that if you only have a
related but not *identical* theorem (say, Bolzano-Weierstrass), you still
need to derive exactly this statement from it. It's non-trivial work.
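In Coq's standard real-number library, the statement you'd need looks
roughly like the following (Un_growing, cv_infty and Un_cv are the library's
notions of "increasing", "diverges to +infinity" and "converges to l");
stating it is the easy part, proving or locating it is the work:

    Require Import Reals.
    Open Scope R_scope.

    Lemma growing_dichotomy :
      forall Un : nat -> R,
        Un_growing Un -> cv_infty Un \/ (exists l : R, Un_cv Un l).
    Proof.
      (* either redo the epsilon-management from scratch, or massage
         the closest existing library lemma into this exact form *)
    Admitted.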
> 3) by continuity, any finite limit would
> have to be a fixed point of f, hence the latter cannot occur.
>
Amusingly, you made a small mistake in your statement: you never assumed f
was continuous. This is the kind of error you uncover when formalizing a
proof.
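For the record, the corrected statement, written out formally, would look
something like this (continuity_pt and cv_infty are the standard library's
pointwise-continuity and divergence-to-infinity predicates):

    Require Import Reals.
    Open Scope R_scope.

    Theorem iterates_diverge :
      forall (f : R -> R) (x : nat -> R),
        (forall y, f y > y) ->
        (forall y, continuity_pt f y) ->
        (forall n, x (S n) = f (x n)) ->
        cv_infty x.
    Proof.
    Admitted.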
More seriously, showing that a finite limit would have to be a fixed point
of f involves a number of steps:
If x is a finite limit of x_n and f is continuous, lim f(x_n) = f(x). This
is a theorem that needs to be proven, yadda yadda.
By definition of x_n, f(x_n) = x_{n+1}, and *you are allowed to take
limits on both sides*. Again, this is a theorem that needs to be proven.
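Spelled out, the two facts being used here are roughly the following; the
first may well already be in the standard library in some form, the second is
the innocuous-looking "you are allowed to take limits on both sides":

    Require Import Reals.
    Open Scope R_scope.

    Lemma limit_through_continuity :
      forall (f : R -> R) (Un : nat -> R) (l : R),
        continuity_pt f l -> Un_cv Un l ->
        Un_cv (fun n => f (Un n)) (f l).
    Proof. Admitted.

    Lemma limit_of_shifted_sequence :
      forall (Un : nat -> R) (l : R),
        Un_cv Un l -> Un_cv (fun n => Un (S n)) l.
    Proof. Admitted.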
But wait: to get a contradiction, you need to show f(x) != x. All you have,
by hypothesis, is f(x) > x. Surely there is a theorem of the form
y > x => y != x, but you need to find it and apply it.
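For what it's worth, I believe the standard library does have it, but you
still have to know (or search for) the name and apply it with the right
orientation:

    Require Import Reals.
    Open Scope R_scope.

    (* if memory serves: forall r1 r2 : R, r1 > r2 -> r1 <> r2 *)
    Check Rgt_not_eq.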
At this point, you've proven A\/B (the sequence has a finite limit or
diverges) and !A (it doesn't have a finite limit). You can now conclude B,
but *only after applying the fact that (A\/B) /\ !A => B*. This is a trivial
fact, but it has to be explicitly stated, proven, and applied.
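Stated as a lemma it is a one-liner, and any tactic like tauto disposes of
it, but it still has to show up as an explicit step:

    Lemma or_elim_neg : forall A B : Prop, (A \/ B) /\ ~ A -> B.
    Proof.
      intros A B [[Ha | Hb] Hna].
      - contradiction.
      - exact Hb.
    Qed.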
QED!
As you can see, the whole process is non-trivial, and the amount of work
depends heavily on the proof process of the specific tool you are using and
on the amount of pre-existing machinery (lemmas and tactics, i.e. procedures
that help you do logical/algebraic manipulations). This is the problem we are
trying to solve in interactive theorem prover design, but it requires
significant effort and is often specific to the mathematical field.
Hope this clarifies some of the issues.
Cody
> What effort would be required to formalize this kind of statement using
> current technology?
>
> Best wishes,
> Lasse
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
>