FOM: role of formalization in f.o.m.
Stephen G Simpson
simpson at math.psu.edu
Thu Jun 3 20:34:28 EDT 1999
In response to Robert Tragesser's questions of 1 Jun 1999 07:31:05,
let me indicate why I think formalization is of the essence in
foundational research, including f.o.m.
I will *not* attempt to be ``unprecedentedly careful'' (cf. Friedman
31 May 1999 17:15:47). All I want to do is convey a certain idea or
outlook that I think is common currency among f.o.m. researchers.
When I teach introductory courses in mathematical logic and f.o.m., I
always try (and sometimes even succeed!) to convey this outlook to the
students. Recently I tried to convey something of this same outlook
in an article intended for a general audience. The article will
appear as a chapter of a volume called ``The Treasury of Philosophy'',
to be published by the Book-of-the-Month Club.
When a scientific discipline or field of study reaches a certain
degree of maturity, it is desirable to reify or codify the existing
knowledge in terms of basic concepts and principles. An example is
Euclid's reification of geometry in the Elements, in terms of ``common
notions'', ``axioms'', ``postulates'', ``definitions'', ``theorems'',
``propositions'', ``lemmas'', etc.
The kind of reification of subjects that I have in mind is *perhaps*
the same as what Tragesser describes as ``the process of uncovering
and articulating and recasting a subject matter according to its first
or essential or most central principles''.
Tragesser and maybe other philosophers refer to this as
``formalization''. I reserve the term ``formalization'' for a
particular, precise version of this, based on the predicate calculus.
> But of course this process was first discovered by Plato (on at
> least Bill Tait's reading of Plato or my reading of Bill Tait's
[ Digression: Could Tragesser please give a reference for his idea that this notion
of ``formalization'' (Tragesser's term) goes back to Plato? I think
of it as going back to Aristotle. But then again, I am not a
philosopher or a historian of philosophy.
End of digression. ]
If we set out to perform and study the reification of subjects in a
highly systematic manner, we are led to questions such as: Which
concepts and propositions are to be taken as basic for a given
subject? How are the non-basic concepts and propositions to be
derived from the basic ones? How can we be sure that nothing has been
omitted?
The predicate calculus has emerged as a precise language in which to
study these and similar questions. In particular, we can use the
strictly formal discipline imposed by the predicate calculus in order
to say with certainty and precision that no basic concepts or
assumptions have been omitted.
From this point of view, a completely worked out reification of a
subject is nothing but a formal theory T in the predicate calculus.
The basic concepts are then known as the primitives of T, and the
basic propositions are known as the axioms of T. The process of
working out a formal reification of a subject, usually in the
predicate calculus or some inessential variant of it, is what I and
other f.o.m. researchers call *formalization*.
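To make this concrete, here is a tiny illustrative fragment of what
such a formal theory T might look like, in the style of first-order
arithmetic. (The particular selection of primitives and axioms here is
my own illustration, not a complete axiomatization.)

```latex
% Primitives of T: a constant 0, a unary function S (successor),
% and binary functions + and \cdot.
% A few sample axioms of T:
\begin{align*}
& \forall x\, (S x \neq 0) \\
& \forall x\, \forall y\, (S x = S y \rightarrow x = y) \\
& \forall x\, (x + 0 = x) \\
& \forall x\, \forall y\, (x + S y = S(x + y))
\end{align*}
```

Every other arithmetical notion is then to be derived from these
primitives, and every other arithmetical proposition from these and
the remaining axioms.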
Some of the considerations which dictate the use of the predicate
calculus here, rather than some other formalism, are:
(i) The predicate calculus as a language is sufficiently rich and
flexible to express every scientific proposition, but not so rich as
to be able to express a lot of obviously flaky or non-scientific
propositions.
(ii) The predicate calculus is topic-neutral, in the sense that it
has no subject matter or scientific assumptions of its own, so it
forces us to explicitly introduce all of the basic concepts that are
needed for the given subject matter as primitive predicates of T.
Other concepts can then be introduced as additional predicates via
definitional extensions of T.
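For example (my illustration, assuming first-order arithmetic with
primitives 0, S, +, and \cdot), the ordering relation \leq need not be
taken as primitive; it can be introduced by a definitional extension:

```latex
% Definitional extension: \leq is defined from the primitive +
x \leq y \;:\Longleftrightarrow\; \exists z\, (z + x = y)
```

Any theorem mentioning \leq can then be mechanically translated back
into the original language of T.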
(iii) The predicate calculus provides a precisely defined, correct,
adequate notion of logical consequence. The Gödel completeness
theorem shows convincingly that, for propositions A and B in the
predicate calculus, A implies B formally if and only if B follows
from A *necessarily*, i.e., regardless of the meanings of the
predicates that are involved, i.e., in all conceivable structures of
the appropriate type.
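In symbols, for propositions A and B of the predicate calculus, the
completeness theorem identifies the two notions:

```latex
% \vdash : B is formally derivable from A in the predicate calculus
% \models : every structure satisfying A also satisfies B
A \vdash B \quad\Longleftrightarrow\quad A \models B
```

The left side is a purely syntactic, mechanically checkable relation;
the right side is the semantic notion of necessary consequence.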
(iv) Using this, we have a way to be sure that a given set of axioms
is adequate, by verifying that all the known propositions of the
given subject are logical consequences of the axioms.
The above scheme of formalized reification of subjects in the
predicate calculus is obviously a scientifically desirable goal.
However, to date it has been convincingly carried out only in a few
cases, mostly limited to mathematics. Outstanding examples:
first-order arithmetic, Tarski's elementary geometry, second-order
arithmetic (i.e. arithmetic plus geometry), ZFC. Thus the study of
these and related formal theories in the predicate calculus and the
formalization of mathematics within them is an essential part of
modern f.o.m. research.
My reading of the history of the contemporary 20th century concept of
``mathematical rigor'' is that it evolved, on a parallel track, from
the same sorts of considerations that also gave rise to the above idea
of formalization within the predicate calculus. This is part of why
it is usually or at least often a fairly straightforward exercise to
formalize detailed rigorous mathematical expositions in the predicate
calculus.
In principle, none of this has anything in particular to do with
mathematics. In principle, the above general scheme (formalized
reification in the predicate calculus) is applicable to any scientific
subject whatsoever. But to date it has been worked out only in
mathematics. In this sense, f.o.m. serves as a model or example of
what can be hoped for in the future in the way of rigorous or formal
foundations of other scientific subjects.
I hope this explanation addresses some of Tragesser's concerns.