[FOM] partial functions / undefined terms in machine proof
Larry Paulson
lp15 at cam.ac.uk
Fri Sep 18 11:58:51 EDT 2015
I am posting John Harrison’s 20-year-old note on this topic as a service to the group. I came across a URL referencing this note, but it no longer worked. As the URL happened to reference the University of Cambridge, I dug around a bit and found the file on our own servers.
I was sure that John had written a more polished paper on this topic, but I have failed to find it.
> To: qed at mcs.anl.gov
> Subject: Re: Undefined terms
> In-reply-to: Your message of "Thu, 18 May 1995 12:39:59 MDT." <199505181835.NAA19932 at antares.mcs.anl.gov>
> Date: Fri, 19 May 1995 10:54:02 +0100
> From: John Harrison <John.Harrison at cl.cam.ac.uk>
> Message-ID: <"swan.cl.cam.:266770:950519095422"@cl.cam.ac.uk>
This is a very important topic. I can think of at least four main approaches to
the problem of undefined terms.
[1] Resolutely give each partial function a convenient value on points
outside its domain. For example, set:
|- inverse(0) = 0
This has the advantage that you can state many theorems more elegantly:
|- inverse(inverse(x)) = x
|- 0 < x <=> 0 < inverse(x)
|- inverse(-x) = -inverse(x)
|- inverse(x * y) = inverse(x) * inverse(y)
But it also gives you strange "accidental" theorems like
|- inverse(0) = 0
itself.
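As a concrete illustration of approach [1], here is a minimal Python sketch (the name `inv` and the use of `Fraction` are my choices, not part of the original note) of a totalized inverse with inv(0) = 0, under which the convenient identities above hold for every input:

```python
from fractions import Fraction

def inv(x: Fraction) -> Fraction:
    """Totalized multiplicative inverse: inv(0) is *defined* to be 0."""
    return Fraction(0) if x == 0 else 1 / x

# The "elegant" theorems hold unconditionally, 0 included:
for x in [Fraction(0), Fraction(3), Fraction(-2, 7)]:
    assert inv(inv(x)) == x
    assert inv(-x) == -inv(x)
    for y in [Fraction(0), Fraction(5)]:
        assert inv(x * y) == inv(x) * inv(y)

# ...but so does the "accidental" theorem:
assert inv(Fraction(0)) == 0
```

Note that even inv(x * y) = inv(x) * inv(y) holds with no side conditions here, since a zero on either side forces both sides to 0.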
[2] Give each partial function some arbitrary value outside its domain. This
gives fewer convenient theorems, but also fewer accidental ones.
Nevertheless you still get some, e.g. assuming we define
|- x / y = x * inverse(y)
then we have
|- 0 / 0 = 0 * something = 0
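That accidental theorem can be checked mechanically; a sketch (treating "something" as a fixed but unspecified value, here drawn at random to emphasize its arbitrariness):

```python
import random

# Approach [2]: inverse(0) is some fixed but arbitrary value.
ARBITRARY = random.random() + 1.0   # stands in for the unspecified inverse(0)

def inv(x: float) -> float:
    return ARBITRARY if x == 0 else 1 / x

def div(x: float, y: float) -> float:
    return x * inv(y)   # the definition x / y = x * inverse(y)

# Whatever value inverse(0) takes, 0 / 0 = 0 * something = 0:
assert div(0.0, 0.0) == 0.0
```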
[3] Encode the domain of the partial function in its type and make its
application to arguments outside that domain a type error. This works well
in situations where definedness is implicit, like differential equations
where
|- d f(x) / dx = H[f,x]
includes (I think) an implicit assertion that f is differentiable iff
H[f,x] is defined. On the other hand in complicated situations in
analysis, one could imagine the types becoming staggeringly complicated,
and this approach disallows routine manipulations like:
|- inverse(-x) = -inverse(x)
|- inverse(x * y) = inverse(x) * inverse(y)
without incurring additional typechecking obligations (which might be
nontrivial if x is complicated).
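Approach [3] concerns static types, which Python can only approximate; as a rough dynamic sketch (the `NonZero` wrapper is my invention), the domain restriction is discharged once at the "typechecking" boundary, i.e. at construction, rather than inside the function:

```python
from fractions import Fraction

class NonZero:
    """Plays the role of a 'nonzero rational' type: constructing a value
    discharges the definedness obligation once and for all."""
    def __init__(self, value: Fraction):
        if value == 0:
            raise TypeError("0 is outside the domain of inverse")
        self.value = value

def inv(x: NonZero) -> Fraction:
    # Total on its restricted argument type; no domain check needed here.
    return 1 / x.value

assert inv(NonZero(Fraction(4))) == Fraction(1, 4)

# Applying inverse "outside the domain" is rejected at the type boundary:
try:
    NonZero(Fraction(0))
    assert False, "should have been a type error"
except TypeError:
    pass
```

The cost John describes shows up here too: to state inv of a product, one must first prove (construct) that the product is a `NonZero`, which is the extra typechecking obligation.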
[4] Have a true logic of partial terms. This is the most flexible, since one
can define multiple notions of equality; e.g. "both sides are defined and
equal" or "if one side is defined so is the other and they are equal".
The downside is that the logic becomes more complicated. I don't know how
troublesome this is in practice. Anyway, I think there's a strong argument
that this is how mathematicians normally think informally.
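A logic of partial terms can be simulated with an option type; a sketch (names mine) of the two notions of equality mentioned above, with `None` playing the role of "undefined":

```python
from fractions import Fraction
from typing import Optional

def inv(x: Fraction) -> Optional[Fraction]:
    """Genuinely partial inverse: undefined (None) at 0."""
    return None if x == 0 else 1 / x

def both_defined_eq(a: Optional[Fraction], b: Optional[Fraction]) -> bool:
    """'Both sides are defined and equal.'"""
    return a is not None and b is not None and a == b

def kleene_eq(a: Optional[Fraction], b: Optional[Fraction]) -> bool:
    """'If one side is defined, so is the other, and they are equal.'"""
    if a is None or b is None:
        return a is None and b is None
    return a == b

assert both_defined_eq(inv(Fraction(2)), Fraction(1, 2))
assert not both_defined_eq(inv(Fraction(0)), inv(Fraction(0)))  # undefined, so not equal
assert kleene_eq(inv(Fraction(0)), inv(Fraction(0)))            # but Kleene-equal
```

The flexibility (and the complication) is visible in the example: whether inverse(0) = inverse(0) is a theorem depends on which equality one has in scope.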
As far as I know, HOL takes approaches [1] and [2] according to the whim of the
user, Nuprl and PVS normally take approach [3], while IMPS takes approach [4].
Old LCF has (used to have?) something analogous to approach [4], and Lambda did
too, though I think it was abandoned in the face of user pressure for a simpler
logic. I'm not sure about other systems. For generic provers like Isabelle, I
suppose you have a free choice.
There's a good discussion of these issues in Farmer's paper "A Partial
Functions Version of Church's Simple Theory of Types" (JSL vol. 55, 1990; pp.
1269-1291). There's also an interesting-looking paper by Feferman on this topic
which I've just received a copy of (it's at home now so I can't give a
reference).
John.