[FOM] 654: Theory Inspired by Automated Proving 1
hmflogic at gmail.com
Wed Feb 17 20:08:16 EST 2016
On Wed, Feb 17, 2016 at 11:27 AM, Lawrence Paulson <lp15 at cam.ac.uk> wrote:
> Type systems are indeed fundamental in the world of interactive theorem proving. Some of the issues are purely philosophical, such as whether it is right to regard a line as a set of points, or to consider -1 to be a certain (infinite) set. But there are a great many practical reasons why practitioners have preferred typed formalisms. The obvious reason is that the type checker performs a strong and automatic confirmation that a given expression is well-formed. But probably a more significant reason is that, in combination with syntactic overloading, we can achieve a notation that is both expressive and tolerably readable. We can at least use the same symbol (+) for natural numbers, integers, complex numbers and even formal polynomials and matrices, and perhaps more impressively, we can set up the basics of limits and continuity in the abstract setting of topological spaces and successively refine them to metric spaces, Euclidean spaces, etc, etc while using the same symbols for them and without having to duplicate mathematical arguments in these different settings.
> I’m only aware of two convincing implementations of set theory in proof assistants. One of them is Isabelle/ZF, which is my own work, and which provides a very nice environment for working in axiomatic set theory, but not for working in any other domain of mathematics. But notational issues cause problems very quickly. A textbook can use the same symbol (+) for both ordinal and cardinal addition, distinguishing the two by variable name conventions: this isn’t possible in a typeless framework. The system that gets it right is Mizar, which has an underlying set theory and a very elaborate and impressive soft type theory built above it. No one has managed to duplicate this, and Mizar itself, as far as I know, hasn’t been updated to be technologically competitive with modern systems.
> If people want a set theory-based system, it would make sense to start with Mizar. There is no point confining oneself to ZC: the axiom of replacement does not cause any problems at all.
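Paulson's overloading point can be made concrete. The following is a minimal illustrative Lean 4 sketch (the `Parity` type is a hypothetical example, not anything from the discussion): the same symbol `+` resolves to different operations through the `Add` typeclass, which is exactly the syntactic overloading being described.

```lean
-- the same `+` symbol means different things at different types
#check (2 + 3 : Nat)   -- resolves to Nat.add
#check (2 + 3 : Int)   -- resolves to Int.add

-- a hypothetical user-defined type gets its own `+` the same way
inductive Parity
  | even
  | odd

instance : Add Parity where
  add a b :=
    match a, b with
    | .even, .even => .even
    | .odd,  .odd  => .even
    | _,     _     => .odd

#check Parity.even + Parity.odd   -- resolves to the Parity instance
```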
From the beginning, my instinct about this elaborate typed approach has
been that any advantages it has can be replicated more simply by
naturally sugaring ZFC or ZC along lines that are quite natural from
the point of view of classical foundations.
I then assumed that I must be wrong about this, given the points of
view that Larry Paulson is expressing here, and the more radical and
extreme points of view of some others. Also, Jeremy Avigad agrees with
Larry Paulson on this, although he freely admits that there is what he
calls the "dark side" of type theory.
But I have recently had significant contact with John Harrison, who
has a very different take, one much closer to my instincts. Just look
at his amazing continued success. He certainly knows much more about
involved type systems than I do, so it would be nice to hear from him
about this issue.
In the meantime, it may just be possible for me to get to the bottom
of this issue.
Firstly, right now I would like to talk about semantic
representation. This does not include (or I am not taking it to
include) verification. It is already a serious matter how to do this
properly while maintaining maximum readability and writability. Here I
believe that any involved hard typing is going to be unacceptable to
the wider mathematical community. Furthermore, here my instincts are
yet stronger that it is not at all necessary, and that it causes more
problems than it solves.
One reason that my instinct is that complicated typing is not the way
to go is that I certainly do not write mathematics with strong typing,
and I am sensitive to formal correctness. Granted, I don't actually
write formally correct mathematics. But I am pretty conscious of the
partial senses in which I do and the senses in which I don't. So from
my point of view, fixing my usual mathematical writing so that it is
perfect, and having a uniform methodology for doing this, is just
another attainable challenge of a foundational nature.
On the other hand, when proofs are brought into the picture, it does
appear that having some elaborate hard typing forces the verifier to
take care of trivialities as a first priority. If the verifier doesn't
put these priorities at a high level, then it is in danger of being so
inefficient that it becomes worthless. This is an abstract argument
against my position, of the kind that Jeremy Avigad often makes with
me.
I have two reactions. Firstly, there has to be some subtle
prioritization of tasks for the verifier anyway, so why not start the
subtleties earlier, incorporating what can be viewed as "shades of
hard typing" as a special case of a wider, more generally conceived
system of priorities for the verifier? What amounts to "type checking"
would then merely become a special case of a more general enforced set
of priorities, without any particular special status.
Secondly, and perhaps even more powerfully, John Harrison quite
happily avoids relying on involved hard typing, and does not miss it.
He seems to simply get very, very clear about the mathematics
semiformally, and then breaks everything down in very natural ways,
uniform throughout his corpus, and everything falls into place in
record time - all as a part time "hobby".
Coming back to something specific, I have more or less decided to
start writing the content (without proofs) of the undergraduate math
curriculum in a semantically perfect way. All of the issues you raise
should arise already there.
This issue of using the same symbols for different things is obviously
a key one. Mathematicians do not solve it by involved hard typing.
They solve it with a bare minimum of "where clauses".
What I want to do is to get steeped in these situations, and develop
the syntax and semantics of the kind of "where clauses" that
mathematicians routinely teach and write. I think that as I move from
elementary set theory to the foundations of the number systems, this
issue will arise in vicious enough form to get to the heart of the
matter.
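As a hedged sketch of the textbook convention being described, here is how a typed system such as Lean 4 mimics resolution of an overloaded symbol by the declared range of variables. The `Ordinal` and `Cardinal` structures below are toy stand-ins, not real ordinal or cardinal arithmetic; the proposal here would be comparable "where clause" sugar over raw set theory, achieving the same resolution without hard typing.

```lean
-- toy stand-ins, NOT real ordinals or cardinals
structure Ordinal where val : Nat
structure Cardinal where val : Nat

def ordAdd (a b : Ordinal) : Ordinal := ⟨a.val + b.val⟩
def cardAdd (a b : Cardinal) : Cardinal := ⟨max a.val b.val⟩

instance : Add Ordinal := ⟨ordAdd⟩
instance : Add Cardinal := ⟨cardAdd⟩

section
-- "where α, β range over ordinals and κ, μ range over cardinals"
variable (α β : Ordinal) (κ μ : Cardinal)
#check α + β   -- the declared range of the variables selects ordAdd
#check κ + μ   -- the same symbol `+` here selects cardAdd
end
```

The textbook gets the same effect from letter conventions alone; the question raised in this post is what well-designed sugar over ZC/ZFC would let a verifier read the convention off in the same way.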
In summary, we need a new kind of sugar: a well thought out form of
the very old kind of sugar that is actually in use everywhere.
ZC and ZFC are both of great importance. ZC is a little stronger than
HOL Light, which is what Harrison uses. The two systems are
foundationally and philosophically different, although I can see that
such differences are not of much concern for the proving community.
"The obvious reason is that the type checker performs a strong and
automatic confirmation that a given expression is well-formed. But
probably a more significant reason is that, in combination with
syntactic overloading, we can achieve a notation that is both
expressive and tolerably readable. We can at least use the same symbol
(+) for natural numbers, integers, complex numbers and even formal
polynomials and matrices, and perhaps more impressively, we can set up
the basics of limits and continuity in the abstract setting of
topological spaces and successively refine them to metric spaces,
Euclidean spaces, etc, etc while using the same symbols for them and
without having to duplicate mathematical arguments in these different
settings."
Right now, my instincts are that this can be appropriately done in the
usual set theoretic setups with some new, careful, general sugaring,
and in a way that is FAR MORE than "tolerably readable" - for axioms,
definitions, and theorems, not proofs. The issue of using the same
symbols seems pretty well reflected in the early undergrad material on
the number systems, and already here there is something serious to
think about in terms of just the right kind of sugar. As for
limits/continuity, I am not quite sure what you are driving at that is
more than with the number systems, and so it would help to have a
watered down toy example of where what I am talking about needs a new
idea.
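For what it is worth, one watered-down toy version of the refinement Paulson describes might look as follows in Lean 4. All names here are hypothetical: a notion of limit point is stated once, abstractly, for any type carrying a neighborhood structure, and a more concrete (metric-like) instance then inherits it without the definition being restated.

```lean
-- abstract: "U is a neighborhood of x", for any type X
class Nbhd (X : Type) where
  nhd : X → (X → Prop) → Prop

-- limit point defined ONCE at the abstract level
def limitPt {X : Type} [Nbhd X] (x : X) (s : X → Prop) : Prop :=
  ∀ U, Nbhd.nhd x U → ∃ y, y ≠ x ∧ s y ∧ U y

-- refine: a toy "metric" neighborhood structure on Int
instance : Nbhd Int where
  nhd x U := ∃ r : Int, 0 < r ∧ ∀ y, x - r < y ∧ y < x + r → U y

#check limitPt (0 : Int)   -- the abstract definition, now about Int
```

The claim at issue is whether the same one-definition-many-refinements effect can be had over plain set theory with suitable sugar, rather than with a typeclass mechanism.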
By the way, John Harrison called my attention to Metamath. How does
that setup fit into the picture?