[FOM] General Foundations Discussion

Harvey Friedman friedman at math.ohio-state.edu
Sat Jul 26 23:07:56 EDT 2003

Reply to Franzen 7/26/03  11:58AM  Model theory and foundations,
Urquhart  7/22/03 2:39 PM Model Theory and foundations
and Mahalanobis 7/26/03 2:22PM Model theory and foundations.

Franzen wrote:

> Steve Simpson says:
>> I think this is where Baldwin goes wrong.  Insufficient awe and
>> reverence in the face of truly epochal work in foundations of
>> mathematics, e.g., the work of Goedel and Turing.  Consequent failure
>> to apply appropriate standards when evaluating "foundational" claims
>> of other research in mathematical logic.
> I don't think this is a fruitful line of argument. You may recall
> that there was some controversy in connection with your own FOM
> posting "Friedman's independence results, an epochal f.o.m. advance",
> in which you claimed that the work in question "represents
> tremendously important progress in f.o.m.". The question to what
> extent your comments were justified was never resolved, and I don't
> think there is any point at all in dwelling on what does or does not
> constitute sufficient "awe and reverence" or on what is or is not
> "truly foundational". Rather, let people present whatever
> considerations and results that they consider relevant to the
> foundations of mathematics, as long as the moderator does not find
> their contributions obviously irrelevant or their claims palpably
> absurd.

I disagree strongly with this point of view. I *know* just how fruitful it
is to dwell carefully on what is "truly foundational", and to frame research
programs that productively bear directly on central foundational issues.
This requires constant consideration and reconsideration of what these
central foundational issues are, and what can reasonably be achieved at any
given stage of development. One also needs to obtain feedback from people in
related fields such as philosophy, computer science, and mainstream
mathematics to test and confirm or disconfirm one's views and instincts.

Sufficient dwelling on what is "truly foundational" causes one to look
intensively for additional programs in f.o.m. whose bearing on central
foundational issues is clearer and clearer. In this way, the foundational
impact of the research gets stronger and stronger.

In fact, the lack of such dwelling on what is "truly foundational" has
caused, or at least accompanied, the long term move away from foundations of
mathematics and philosophy that mathematical logic has made within the
mathematics community.

The bright spot is that, in the meantime, a lot of technical machinery and
expertise has been developed in mathematical logic, that can now be put to
good use for f.o.m.

However, it is not fruitful to take some technical machinery and say "what
can I do with it for f.o.m.?" Rather, one should dwell on what is "truly
foundational" and work from there, armed with the knowledge of what kind of
technical results are obtainable given what we know.

In fact, one of the things I would like to see come out of the FOM list is a
reworking and rethinking of a subject like model theory so that it becomes
much more foundational than it is now.

If I had more time right now, I would have already posted some perhaps at
first naïve and mundane ideas about foundational aspects of the notion of
dependence that Baldwin was discussing in the context of modern developments
in model theory. Hopefully, this might be extended and enriched into
something more substantial for f.o.m.

Also, the concrete form of the upward Lowenheim-Skolem theorem that I was
talking about earlier is the first stage in the development of a subject

Complete Theory of Everything: Logic in the Universal Domain

which is directly related to original conceptions of Frege in his
development of first order predicate calculus. I gave a series of lectures
on this recently at the Princeton Philosophy Dept in my capacity as Visiting
Professor of Philosophy last Fall. I sought and received significant
feedback from analytic philosophers there about this and other topics.

So this "complete theory of everything" is just one basic but focused
example of what I mean by reworking some elementary model theory from an
overtly foundational perspective.
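For reference, the classical upward Lowenheim-Skolem theorem mentioned above can be stated as follows (this is the standard textbook form, not the concrete form from the earlier posting, which is not reproduced here):

```latex
% Classical upward Lowenheim-Skolem theorem (textbook form; the
% "concrete form" alluded to above appeared in an earlier FOM posting
% and is not restated here).
\begin{theorem}[Upward L\"owenheim--Skolem]
Let $T$ be a first-order theory in a language $L$, and suppose $T$ has
an infinite model. Then for every cardinal
$\kappa \ge \max(|L|, \aleph_0)$,
$T$ has a model of cardinality exactly $\kappa$.
\end{theorem}
```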

With regard to the specific historical situation on FOM that Franzen is
referring to: Continued preoccupation with what is "truly foundational" has
led to major advances in what is now called Boolean Relation Theory. This is
the current result of several decades of constant dwelling on what is "truly
foundational". As predicted by some of my colleagues, this dwelling on what
is "truly foundational" led to considerable further developments that
increased the power of the results. This was carefully tested and confirmed
through a number of contacts I have with the high level mainstream
mathematics community.

I would certainly welcome a serious and unbiased discussion of what is
"truly foundational" about Boolean Relation Theory, here on the FOM. I don't
know if it would on the whole be fruitful or not. In any case, obviously it
goes without saying that I am not going to look very unbiased when I discuss it.

Urquhart wrote:

> John Baldwin is quite right to call attention
> to the fact that saturation principles are central
> in recent nonstandard analysis, and also to point
> out the brilliant work of Ward Henson, Matt Kaufmann
> and Jerry Keisler on the strength of nonstandard
> analysis.  

By far the most well known and visible foundational point of nonstandard
analysis is the opposite point - that one gets conservative extensions. In
1967 as a student, I developed an obvious formal system of nonstandard
arithmetic extending Peano Arithmetic, and showed that it was a conservative
extension of Peano Arithmetic. This has been followed up carefully in recent
years by a number of people including Avigad.
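The conservativity claim has the following standard shape (a schematic statement; the exact axioms of the 1967 system are not reproduced in this post):

```latex
% Conservative extension, schematically. Here PA* stands for a
% nonstandard arithmetic: PA in a language extended by a predicate
% st(x) for "x is standard", with suitable additional axioms.
% (The precise 1967 axiomatization is not given in this post.)
\begin{definition}
A theory $T^{*}$ in a language $L^{*} \supseteq L$ is a
\emph{conservative extension} of a theory $T$ in $L$ if for every
$L$-sentence $\varphi$:
\[
T^{*} \vdash \varphi \quad\Longrightarrow\quad T \vdash \varphi .
\]
\end{definition}

\begin{theorem}[schematic]
$\mathrm{PA}^{*}$ is a conservative extension of $\mathrm{PA}$: every
sentence in the language of arithmetic provable in $\mathrm{PA}^{*}$
is already provable in $\mathrm{PA}$.
\end{theorem}
```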

Mathematicians these days are not generally interested in abstract set
theoretic mathematics, but rather in concrete mathematics, which is at least
separable - at the outside, in Borel functions on Polish spaces (complete
separable metric spaces). The vast bulk of mathematics lives not only within
Polish spaces, but within very very tiny fragments of Borel functions. E.g.,
piecewise continuous or even analytic (power series representations) or
semialgebraic, or even linear and semilinear, functions and structures, as
well as, of course, the directly countable or finite mathematics.

I know of no results that would give any indication that nonstandard
analysis would have any substantial power in proving any results of this
normal character that could not be proved by standard methods - and in fact,
the use of the nonstandard analysis for proving such results normally is
KNOWN to add no such power. I.e., the nonstandard theory is a conservative
extension of the standard theory.

It may be true that in a relatively strong set theoretic framework to begin
with, one might use a strong set theoretically based formulation of
nonstandard analysis, and in the resulting set theoretic environment, a
result of that kind might appear. But careful examination will reveal that
this does not make the point that I am sure you are intending to make.

Of course, it is quite possible that some use of nonstandard analysis might
make some even fairly concrete results easier to prove than if nonstandard
analysis was not used. However, it is my understanding that most
mathematicians would greatly prefer the standard methods, and it is well
known how to reasonably remove the nonstandard analysis.

I am sure that there are some interesting candidates for counterexamples to
the previous paragraph, but I am also sure that they are quite rare.

It would be interesting for the FOM to discuss some candidates for
counterexamples to this. I.e., examine them and see what is involved in
removing the nonstandard analysis.

In this connection, some examples of Gromov are supposed to be interesting
to look at. 

On the other hand, the use of nonstandard models in direct f.o.m.
investigations is well known and obviously not something one would want to
remove. However, the proof theorists have gotten interested in removing its
use for some results (conservative extension results, actually!), and I
wrote on the FOM some time ago about a general method I discovered to remove
the use of nonstandard models in favor of finitist methods.

I spend a good deal of my life using countable nonstandard models in order
to establish independence results and other f.o.m. results. E.g., see

Harvey Friedman, Working with Nonstandard Models,

However, I see no apparent connection between my use of nonstandard models
and what people like to do who champion nonstandard analysis.

> Incidentally, the Henson/Keisler paper is truly
> marvellous in explaining why things like the
> construction of Loeb measures (that depend on
> saturation properties) are so powerful.  The
> system of nonstandard analysis with countable
> saturation gives us the power of third order
> arithmetic!  

This is misleading. I have been involved in finding truly mathematical
principles which, without any logical principles, give substantial logical
strength. I have posted on this extensively here on the FOM, and went to
great trouble in order to do this in a convincing way to even get to the
power of EFA = exponential function arithmetic. Also in those postings, I
got somewhat farther than that, and promised that I could go much farther,
while maintaining the standards of "strictly mathematical". These postings
go under the name "Strict Reverse Mathematics".
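For readers unfamiliar with EFA, one common axiomatization is the following (presentations vary across the literature):

```latex
% EFA (exponential function arithmetic): one common axiomatization.
% Details of the presentation vary across the literature.
% EFA consists of:
\begin{itemize}
\item the basic (quantifier-free) defining axioms for
      $0,\ S,\ +,\ \cdot,\ \exp,\ \le$;
\item induction restricted to $\Delta_0$ (bounded-quantifier) formulas:
\[
\bigl(\varphi(0) \wedge
      \forall x\,(\varphi(x) \rightarrow \varphi(Sx))\bigr)
\rightarrow \forall x\,\varphi(x),
\qquad \varphi \in \Delta_0 .
\]
\end{itemize}
```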

I don't expect that nonstandard analysis will play any serious role in
Strict Reverse Mathematics. I stand to be educated.

> Contrary to a universally accepted opinion, there
> are results in areas like stochastic processes
> that can ONLY be proved by nonstandard means,
> and are unprovable by standard methods.
> This surprising fact is explained in the work
> of Fajardo and Keisler.

This can make sense only in some very special notion of ONLY.

The only examples of mathematical results that are known to be unprovable by
the usual notion of standard means, have nothing to do with nonstandard
analysis, and are of course various independence results from set theory,
starting with Godel, up to the present.

You obviously mean something outside normal terminology here, and so it would
be interesting and valuable for you to elaborate on just what you mean.

> I am increasingly of the opinion that nonstandard
> analysis is one of the big foundational discoveries
> of the 20th century.  Model theorists need to promote
> their own foundational successes.

I see no basis for this view, although the kind of claims you seem to make
for nonstandard analysis in your posting, if interpreted carelessly in ways
I am sure you do not intend - nor would Keisler - might form a basis for
this view. 

My own view is that although noteworthy, it is far below the level of the
major advances in 20th century f.o.m., for very clear reasons.

However, there are some possible developments that could change the
situation. But the developments I have in mind would not be expected on the
basis of the development of nonstandard analysis in the 20th century, and do
not seem to be related to what people like to do who champion nonstandard
analysis.

Mahalanobis wrote:

>> I like nonstandard analysis a lot, but I don't see it as measuring up
>> to the epochal work of Goedel and Turing.  In particular, one can't
>> use nonstandard analysis as the basis of an exposition of mathematics
>> from the ground up.  Part of the difficulty is that, in a precise
>> sense, one can't give an example of an infinitesimal.
> I don't know much about non-standard analysis. I will dare ask a
> question here in hope that at the end I might end up wiser. Why is the
> non-existence of infinitesimals an issue? I have tried to understand the
> concept of continuity in classical mathematics and it seems to me that
> the use of epsilon-delta is highly conceptual. So what is wrong with
> conceptual infinitesimals?
One can easily define the real numbers explicitly, and explicitly define
what a continuous function is. In fact, THE field of real numbers is the
UNIQUE complete ordered field, up to isomorphism.
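Both explicit definitions referred to here are standard; for concreteness (textbook statements, not specific to this posting):

```latex
% Categoricity of the reals: any complete ordered field is isomorphic
% to R, and the isomorphism is unique.
\begin{theorem}
If $(F, +, \cdot, \le)$ is a complete ordered field, then there is a
unique isomorphism of ordered fields $F \to \mathbb{R}$.
\end{theorem}

% The explicit (epsilon-delta) definition of continuity.
\begin{definition}
$f \colon \mathbb{R} \to \mathbb{R}$ is \emph{continuous at} $x$ if
\[
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall y\;
\bigl(|y - x| < \delta \rightarrow |f(y) - f(x)| < \varepsilon\bigr).
\]
\end{definition}
```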

However, apparently one cannot define THE nonstandard real numbers
explicitly, and explicitly define what a nonstandard continuous function is.
THE field of nonstandard real numbers makes no sense, at least currently.
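For contrast, the "conceptual infinitesimal" characterization of continuity asked about above takes the following standard form in any model of the nonstandard reals (here $y \approx x$ means that $y - x$ is infinitesimal, and $f^{*}$ is the nonstandard extension of $f$):

```latex
% Nonstandard characterization of continuity at a standard point.
% y ~ x ("y is infinitely close to x") means y - x is infinitesimal;
% f* is the extension of f to the hyperreals *R. This holds in any
% model of the nonstandard reals, despite the absence of THE model.
\begin{theorem}
For $f \colon \mathbb{R} \to \mathbb{R}$ and $x \in \mathbb{R}$:
$f$ is continuous at $x$ if and only if
\[
\forall y \in {}^{*}\mathbb{R}\;
\bigl(y \approx x \rightarrow f^{*}(y) \approx f^{*}(x)\bigr).
\]
\end{theorem}
```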

In fact, various undefinability issues have been discussed here on the FOM
some years ago. I got involved in the discussion, and proved some negative
results. But I remember that there were still some interesting relevant
questions that remained open. Anybody care to restart this productive
discussion where it left off?

The apparent lack of THE nonstandard real numbers is a DECISIVE drawback
against the use of nonstandard analysis in its most obvious foundational
role (although there may be other roles). This is what is behind the
categorical rejection by the mathematics community of teaching calculus this
way. Keisler did write a big calculus book that promptly went out of print.

Mathematicians wish to concentrate their attention on mathematical
structures that are not only explicitly defined, but also are canonical in
various hard nosed senses.

Abraham Robinson did show that use of nonstandard reasoning in the sense of
nonstandard analysis, was justifiable, in the sense that any standard result
proved this way could be proved by standard methods. He did this through his
usual construction of A (not THE) model of the nonstandard reals (and
nonstandard functions on them, etc.). This was done by ultrapowers or by
compactness.

Harvey Friedman
