FOM: A dual view of foundations

Todd Wilson twilson at csufresno.edu
Sun Feb 27 19:19:09 EST 2000


Surveying the postings on FOM concerning alternative foundational
schemes, and the criticisms that have been raised against such schemes,
it seems to me that many of the arguments boil down to a few general
points, which I would like to try to bring to the fore by asking some
general questions concerning the use of set theory in f.o.m.  First, a
basic observation.

Set theory seems to play two distinct (yet related) roles in f.o.m.
On the one hand,

    Set theory provides a very abundant but for the most part
    meaningless ontology for mathematics.

By "meaningless", I mean merely that the ontology that set theory
provides need not, in order to succeed in its function as an ontology,
carry any intrinsic meaning.  What matters is that there are enough
objects available, and enough relations between these objects
representable, that we can map whatever we wish to speak about onto
what's provided.  When we define real numbers using Dedekind cuts, for
example, we surely do not believe that a real number *really is* an
ordered pair of two particular sets of rational numbers, which are
themselves ordered pairs of ... (or that ordered pairs themselves
really are doubleton sets consisting of overlapping singleton and
doubleton sets); rather, we understand this to be an "implementation"
of the abstract notion of real number, which is validated by the fact
that, when all of the basic relations concerning real numbers are
similarly specified, all of the relations that we think ought to hold
of real numbers actually do hold in our implementation.  The function
of set theory, in this context, is simply to provide the raw materials
for our implementations and the means to reason about them with
sufficient precision so as to satisfy our demands for rigor.
Incidentally, this same point of view should also be available to the
"platonistically impaired" (smile):  for them (us?), our
representations are just configurations in a formal system, and the
aforementioned abundance refers to the scope or range of
configurations possible and our ability to establish the "right"
theorems about them.
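
To make the "implementation" idea concrete, here is a minimal Python
sketch (the function name is mine, not from the post) of the Kuratowski
encoding of ordered pairs as doubleton sets mentioned above.  The point
is exactly the one made in the paragraph: the encoding succeeds not
because a pair "really is" such a set, but because it validates the one
property we demand of ordered pairs.

```python
from itertools import product

def kuratowski_pair(a, b):
    """Encode the ordered pair (a, b) as the set {{a}, {a, b}}."""
    return frozenset([frozenset([a]), frozenset([a, b])])

# The implementation is validated by checking that the characteristic
# property of ordered pairs holds:
#   (a, b) = (c, d)  if and only if  a = c and b = d
for a, b, c, d in product([0, 1, 2], repeat=4):
    assert (kuratowski_pair(a, b) == kuratowski_pair(c, d)) \
           == (a == c and b == d)
```

Any other encoding satisfying the same property would serve the
ontological role equally well, which is the sense in which the ontology
is "meaningless".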

On the other hand,

    Set theory provides an axiomatization of our pre-theoretic notion
    of collection (or at least iteratively conceived collections).

The axioms of set theory speak directly about the properties of the
membership relation and about the existence of sets that behave in
certain ways with respect to membership.  These axioms can be seen as
an attempt to capture everything about the notion of collection that
can be subjected to rigorous examination.

The concerns of ontology are mainly practical:  do we have enough
objects and relations around, and are the reasoning methods sound and
complete enough to develop the mathematics we want to develop?  The
concerns of axiomatization are mainly philosophical:  have we captured
the pre-theoretic notion(s) as soundly and completely as possible?
Nevertheless, we often see these concerns overlap.  For example, it
would be difficult to make effective use of a bare ontology if the
objects didn't already come with some structure that was exploitable
in a natural way.  After all, there really isn't much that is
arbitrary about the use of Dedekind cuts to represent real numbers; if
we agree that a real number is completely specified by knowing which
rationals are strictly less and strictly greater than it, and that a
rational number x is completely specified by knowing which equations
of the form nx = m it satisfies for integers n and m, then the
Dedekind cut implementation of real numbers is quite natural.  Despite
this interaction, however, I think that the foundational roles of
ontology and axiomatization are basically orthogonal.  For example, I
don't think it is beyond imagination that an ontological system could
be devised that was sufficient to represent the objects and reasoning
of mathematics but wasn't simultaneously an axiomatization of some
given pre-theoretic notion(s).
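
The naturalness of the Dedekind-cut implementation can be illustrated
with a small Python sketch (function names are mine, chosen for
illustration): a real number is represented by its lower cut, i.e. a
membership predicate on the rationals.

```python
from fractions import Fraction

def sqrt2_cut(q: Fraction) -> bool:
    """Lower Dedekind cut for sqrt(2): rationals q with q < 0 or q^2 < 2."""
    return q < 0 or q * q < 2

def rational_cut(r: Fraction):
    """Lower Dedekind cut for a rational r: all rationals strictly below r."""
    return lambda q: q < r

# A real number is completely specified by which rationals lie strictly
# below it, so membership tests recover its order relations:
assert sqrt2_cut(Fraction(7, 5))        # 7/5 = 1.4 lies below sqrt(2)
assert not sqrt2_cut(Fraction(3, 2))    # 3/2 = 1.5 lies above sqrt(2)
assert rational_cut(Fraction(1))(Fraction(99, 100))
```

If one grants that a real is determined by the rationals below it, the
cut just *is* that determining data, which is why the representation
feels forced on us rather than arbitrary.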

Now, getting back to FOM, it seems that in discussions of set theory
as a f.o.m., and in criticisms of alternative f.o.m.'s, the
foundational roles of ontology and axiomatization are often confused.
For example, the recent discussion on "Do we need more axioms?" seems
to be concerned only with set theory in its second role, as we no
doubt already have enough objects available via ZFC to represent
whatever we might wish to represent.  That is, the drive to ever
larger cardinals is fueled by a desire to understand the limits of our
notion of collection and to determine whether any secrets hidden there
will shed light on the striking incompletenesses of our current
axiomatic setup, not because we have run out of objects
for our representations and are in need of more of them.  Similarly,
the criticisms of categorical foundations of mathematics overlook the
fact that topos theory can provide an ontology that is just as
abundant as the one set theory provides, and therefore can function
just as adequately in this role.  (It also does quite well as a global
theory of sets and functions, where sets and functions are viewed
"from the outside", as opposed to being built up from elements "on the
inside".)

So, to my questions:

1.  To what extent is this dual view of foundations accurate and/or
    relevant?

2.  Is it indeed possible that "an ontological system could be devised
    that was sufficient to represent the objects and reasoning of
    mathematics but wasn't simultaneously an axiomatization of some
    given pre-theoretic notion(s)"?

3.  How much of the success of set theory as a f.o.m. is ontological
    and how much is axiomatic?  How much is due to its integration of
    the two?

4.  Is the integration mentioned in question 3 largely fortuitous, or
    is it (contra question 2) a fundamental aspect of all foundations?

and, as an additional and slightly loaded question concerning
alternative foundational schemes,

5.  Where does naturalness fit into the picture?  As for ontology,
    there is of course an advantage to being able to represent
    mathematics in a simple and direct way.  But once a representation
    and its properties are established, the representation need not
    (and indeed should not) trouble us further.  In other words, if
    forging a representation is a one-time job, why should we discount
    a foundational scheme because it's a little harder to set up the
    basic representations, since once this is done, mathematics can
    (for the most part) proceed as usual?  And as for axiomatization,
    if a foundational scheme proposes to capture some pre-theoretic
    notions from a certain point of view, shouldn't it be judged on
    whether or not it has captured these notions accurately from this
    point of view and not on whether other notions or points of view
    are expressed naturally in terms of the ones at hand?

-- 
Todd Wilson
Computer Science Department
California State University, Fresno
