[FOM] Model theory and foundations

Stephen G Simpson simpson at math.psu.edu
Fri Jul 25 19:21:44 EDT 2003


This is a follow-up to recent postings of Harvey Friedman, Mon 21 Jul
2003 03:08:53 -0400 and Mon 21 Jul 2003 03:21:12 -0400, and John
Baldwin, Tue 22 Jul 2003 08:55:01 -0500 and Tue 22 Jul 2003 13:05:32
-0500, and Alasdair Urquhart, Tue 22 Jul 2003 10:12:23 -0400 and Tue
22 Jul 2003 14:39:21 -0400.

Friedman Mon 21 Jul 2003 03:08:53 -0400 says:

 > So in deference to this fantastic legacy that affects us all on the
 > FOM list, that propels foundations of mathematics at its heights to
 > levels of general intellectual interest that are only matched by
 > the greatest of all intellectual achievements of the 20th century,
 > we should not use the term "foundational" lightly. When using this
 > term, we need to keep this legacy clearly in mind.
 > 
 > If one is to think of some development as "foundational", one
 > should apply the standards that are appropriate in light of this
 > legacy.

Yes.  I think this is where Baldwin goes wrong: insufficient awe and
reverence in the face of truly epochal work in foundations of
mathematics, e.g., the work of Goedel and Turing, and a consequent
failure to apply appropriate standards when evaluating the
"foundational" claims of other research in mathematical logic.

What I don't understand is, why do applied model theorists make these
mistakes?  It is true that model theorists have had some success in
drawing interesting and illuminating connections between mathematical
logic and core mathematics (algebra, analysis, etc).  But, do they
really think that core mathematics is at the pinnacle of contemporary
intellectual life?

Elsewhere in the same FOM posting, Friedman thanks Baldwin for a
high-level articulation of some important themes of contemporary model
theory.  I second this, and I hope Baldwin will continue.  Friedman
goes on to express hope that such articulation will contribute to the
development of truly foundational topics which will be of general
intellectual interest.  Again I concur, but I think there is only an
outside chance of this happening.

In his posting of Mon 21 Jul 2003 03:21:12 -0400, Friedman said:

 > Let me close by noting that Simpson, Dependence relations in model
 > theory, July 17, 2003, 3:20PM, has written a very thoughtful and
 > thought provoking response to an earlier posting of Baldwin, and
 > Baldwin has not directly addressed any of Simpson's points. I would
 > like to see if Baldwin, Simpson, and Friedman are, or can get on,
 > the same page.

I concur with these sentiments.  By the way, the title of my posting
of Thu 17 Jul 2003 12:06:19 -0400 was "Model theory and foundations
IV", not "Dependence relations in model theory".

Alasdair Urquhart Tue 22 Jul 2003 10:12:23 -0400 says:

 > Incidentally, a recent article that I found interesting that
 > discusses the "foundational" aspects of model theory is the piece
 > by Angus Macintyre in the most recent issue of the BSL (June 2003).
 > The basic theme of the piece is that model theory is moving away
 > from the Tarski paradigm towards a more geometrical inspiration.

Yes, Macintyre's views are interesting.  But, in the past, Macintyre
has declined to participate in discussions of his views, here on the
FOM list.  This is a shame.

Urquhart Tue 22 Jul 2003 14:39:21 -0400 says:

 > the Henson/Keisler paper is truly marvellous in explaining why
 > things like the construction of Loeb measures (that depend on
 > saturation properties) are so powerful.

I would dispute this.  

Yes, Henson/Kaufmann/Keisler (JSL 1984, 1986) prove that certain
formal systems of nonstandard analysis enriched with saturation
schemes are mutually interpretable with full 3rd order arithmetic,
etc.  But then H/K/K go on to suggest that this may explain why
nonstandard analysis is, allegedly, mathematically powerful.  The
suggestion appears to be unjustified, because H/K/K do not present
specific *mathematical* -- dare I say "mathematically natural"? --
statements which are strong in the sense of interpreting
proof-theoretically strong systems such as 3rd order arithmetic.  The
saturation principles themselves do not qualify, because they are
metamathematical, not mathematical.
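
(For orientation, here is one common textbook formulation of a
countable saturation principle in nonstandard analysis; the exact
schemes studied by H/K/K may differ in detail:

  If $A_1 \supseteq A_2 \supseteq A_3 \supseteq \cdots$ is a countable
  decreasing sequence of nonempty internal sets, then
  $\bigcap_{n \in \mathbb{N}} A_n \neq \emptyset$.

The reference to internal sets is one reason such principles read as
metamathematical rather than mathematical.)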

Compare this with Reverse Mathematics, where one proves that specific
*mathematical* theorems -- e.g., the Podewski/Steffens Theorem, "every
countable bipartite graph has a matching M and a vertex covering C
such that C consists of one vertex of each edge of M" -- are logically
equivalent (over a weak base theory) to fairly strong set existence
axioms.  Most of these set existence axioms are formulated as
subsystems of 2nd order arithmetic.  See my book, Subsystems of
Second Order Arithmetic.  The accumulation of
such results goes a long way toward explaining the role of strong set
existence axioms in mathematics.
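
(For concreteness, the conclusion of the Podewski/Steffens Theorem as
quoted above can be written out as follows; this is my paraphrase of
the informal statement, not a quotation from their paper:

  For every countable bipartite graph $G$ there exist a matching $M$
  and a vertex cover $C$ of $G$ such that $C = \{ c_e : e \in M \}$,
  where $c_e$ is one of the two endpoints of the edge $e$.)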

 > Contrary to a universally accepted opinion, there are results in
 > areas like stochastic processes that can ONLY be proved by
 > nonstandard means, and are unprovable by standard methods.

I question this.  In what sense are these stochastic results known to
be unprovable by standard methods?  After all, they are provable in
*standard* ZFC, are they not?

 > This surprising fact is explained in the work of Fajardo and
 > Keisler.

I am not familiar with this work.  Alasdair, could you please
elaborate?

 > I am increasingly of the opinion that nonstandard analysis is one
 > of the big foundational discoveries of the 20th century.

I like nonstandard analysis a lot, but I don't see it as measuring up
to the epochal work of Goedel and Turing.  In particular, one can't
use nonstandard analysis as the basis of an exposition of mathematics
from the ground up.  Part of the difficulty is that, in a precise
sense, one can't give an example of an infinitesimal.  There was an
interesting discussion of this issue back in the Golden Age of FOM.
Have a look at the archive for November 1997.
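
(For readers unfamiliar with the terminology, the standard definition
being referred to is, roughly:

  $\varepsilon \in {}^*\mathbb{R}$ is a nonzero infinitesimal iff
  $0 < |\varepsilon| < 1/n$ for every positive integer $n$.

The claim above is that no particular such $\varepsilon$ can be
explicitly exhibited.)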

-- Steve

Stephen G. Simpson
Professor of Mathematics
Pennsylvania State University
http://www.math.psu.edu/simpson


