# [FOM] free logic

Rob Arthan rda at lemma-one.com
Mon Oct 26 17:08:25 EDT 2015

> On 26 Oct 2015, at 07:36, Mitchell Spector <spector at alum.mit.edu> wrote:
>
> I think you're right, Mario.  Instead of taking "x > y" to abbreviate "NOT(x <= y)", we should take it to abbreviate "(x is defined) & (y is defined) & NOT(x <= y)".  If we do that, everything works nicely.

This also falls out in the wash if (working with set theory with \in as the only non-logical symbol)
you take "x > y" to abbreviate "(x, y) \in (>)".
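The difference between the two abbreviations can be sketched in Haskell, modelling an undefined term such as 1/0 by Nothing (the types and function names here are illustrative assumptions of the sketch, not part of Z or of any logic discussed above):

```haskell
import Data.Maybe (isJust)

-- A partial term: Nothing models an undefined value such as 1/0.
type Partial = Maybe Int

-- Convention 1: "x > y" abbreviates NOT (x <= y). An atomic formula
-- containing an undefined term is false, so its negation comes out true.
gtAsNotLe :: Partial -> Partial -> Bool
gtAsNotLe x y = not (le x y)
  where
    le (Just a) (Just b) = a <= b
    le _        _        = False  -- undefined term makes the atom false

-- Convention 2: "x > y" abbreviates
-- (x is defined) & (y is defined) & NOT (x <= y).
gtGuarded :: Partial -> Partial -> Bool
gtGuarded x y = isJust x && isJust y && gtAsNotLe x y

-- gtAsNotLe Nothing (Just 2)  is True  ("1/0 > 2" comes out provable!)
-- gtGuarded Nothing (Just 2)  is False (the guard rejects it)
```

Under the first convention "x > y" and "NOT (x <= y)" diverge exactly when a term is undefined; the guard in the second convention restores the expected equivalence on defined arguments.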

This free logic approach to undefined terms is not unknown in computer science:
it was adopted in the Z notation (https://en.wikipedia.org/wiki/Z_notation) in the late 80s (http://spivey.oriel.ox.ac.uk/~mike/zrm/). Many years later the ISO standard for Z allowed
other approaches (primarily to support people, like me, who wanted to implement efficient
and usable mechanized proof tools for the language).

After many years of working with Z using the standard convention of first-order logic
whereby an undefined term denotes an unspecified value (so 1/0 = 2/0 is unprovable),
I have been heavily involved over the last 8 years or so with a project that makes
extensive use of Z and follows the original convention. My view is that it still
feels like a cunning trick which has a tendency to backfire: I prefer to write
formalisations that are explicit about the conditions necessary to make expressions
denote what they are meant to.

The most obvious place where it backfires is that "x > y" and "\lnot x \le y"
don't mean the same thing, which people find surprising. More subtly, it is
often not very robust against changes in a formalisation as one's ideas develop:
e.g., a formula like "x = 1/y" originally used in a context where y
is known to be non-zero automatically becomes false if the constraints
on the context are relaxed, but that may well not be what you really want.
Finally, it requires what someone (Fred Brooks?) called the "language lawyers"
to be brought in to pontificate on arcana like "{x | x = 1/0 \lor x = 2} = {1/0, 2}"
which depend on the fine details of how the language designers have chosen to
define the set display construct.
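The robustness point about "x = 1/y" can also be sketched in the same Maybe-based style (again an illustrative assumption, not Z itself: safeDiv and eqPartial are invented names for this sketch):

```haskell
-- Division as a partial operation: 1/0 is undefined, modelled by Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- Under the free logic convention, an equation containing an
-- undefined term is simply false.
eqPartial :: Maybe Int -> Maybe Int -> Bool
eqPartial (Just a) (Just b) = a == b
eqPartial _        _        = False

-- In a context where y is known to be non-zero, "x = 1/y" behaves as intended:
--   eqPartial (Just 1) (safeDiv 1 1)   is True
-- Relax the constraint so y may be 0 and the formula silently becomes
-- false rather than flagging that something has gone wrong:
--   eqPartial (Just 0) (safeDiv 1 0)   is False
```

The formula keeps type-checking and keeps evaluating after the context changes; it just quietly changes its meaning, which is the sense in which the convention is not robust.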

Regards,

Rob.