FOM: mainline set theory

Harvey Friedman friedman at math.ohio-state.edu
Sun Sep 13 22:09:43 EDT 1998


This is a response to Shoenfield 3:31PM 9/12/98.

Shoenfield offers a good conventional explanation of the excitement within
the set theory community about a high point in their specialty. However,
before beginning a detailed analysis, let me make a summary of the main
points I will be making.

1. As is common in most specialties in mathematics today, only the
specialists truly feel the excitement. People in set theory certainly know
what their excitement is all about, and feel it. And from the vantage point
of set theory, the excitement has justification. People in mathematical
logic sort of know why the set theorists are excited, even though they
don't generally feel the excitement themselves. And people outside
mathematical logic generally have no clue whatsoever, even after reading
such explanations.

2. The reason for this is that such explanations are not couched in terms
of matters of general intellectual interest (gii), even though I know from
personal experience that most of the set theorists involved are at least
somewhat sensitive to gii. Indeed, if the explanation were instead couched
in terms of general intellectual interest, then the crucial limitations in
this sort of investigation would quickly become apparent.

3. There is a profound and crucial limitation to this kind of investigation
which was already apparent to some extent in the 1930's, and glaring by the
1960's, and critical by the 1970's. This can be seen rather easily by
stepping back and trying to assess set theory with reference to the major
revelations in the foundations of mathematics.

4. Once these limitations are taken into account, the more critical issues
of gii become apparent. This is the very process that encouraged me to
spend 30 years on the program I began to discuss systematically in 5:44AM
9/5/98 "Background on incompleteness."

>     In many years of teaching and writing about axiomatic set
>theory, I have developed a picture of the subject.   The picture
>has three chapters (up to the present).   The first consists of
>the formation of the axiom system ZFC and the development in it of
>set theory through, say, elementary ordinal and cardinal theory.
>A conclusion of this chapter is that virtually all of accepted
>mathematics can be formulated and derived in ZFC.   This conclusion
>has many applications, but it does not, in my opinion, shed much
>light upon the nature of set theory.

But your Chapter 1 includes all that the typical mathematician outside
logic - including the most celebrated core mathematicians - is conscious
of, even at this late date. It is very revealing to ponder why this is so.

>     Chapter 2 is devoted to showing that certain statements are
>unprovable in ZFC.   One does this by producing a model of ZFC in
>which the statement is false; so the main problem is to find ways
>of constructing models of ZFC with special properties.

Recasting this somewhat in terms of gii, one shows that certain
mathematical statements cannot be proved true or false using the commonly
accepted axioms for mathematics (ZFC). Here, and elsewhere, such
"independence results" (necessarily) use the compelling assumption that the
commonly accepted axioms for mathematics (ZFC) are themselves consistent.

Of course, this leaves open for the moment the crucial question of what
kind of mathematical statements are shown not to be proved true or false
using the commonly accepted axioms for mathematics. This matter will be
addressed in some detail later in this posting.

>The first
>important step was taken by Godel, who in 1934 introduced a model
>L.   Godel was able to prove that many results, such as GCH, were
>true in L; this implied that their negations were not provable from
>the axioms.   This work was continued by Jensen and his followers,
>who made an intense study of the structure of L.

There is an important feature of L that is highly relevant to our story.

***Every mathematical statement ever shown NOT to be proved true or false
using the commonly accepted axioms for mathematics HAS been shown to be
true or false in L.***

This can be put another way. Godel showed that the commonly accepted axioms
for mathematics (ZFC) remain consistent if the axiom "every set is
constructible" (i.e., every set lies in L) is added. Then

***Every mathematical statement ever shown NOT to be proved true or false
using the commonly accepted axioms for mathematics HAS been shown to be
true or false using the commonly accepted axioms for mathematics plus
"every set is constructible."***

This already points to a profound weakness of this line of investigation in
terms of gii; namely, mathematicians can appear to avoid the mathematical
independence results entirely by simply accepting that the universe of
mathematical objects consists of the constructible sets only (L); or
alternatively, simply incorporate the axiom of constructibility, usually
written V = L, which asserts that every set is constructible (i.e., lies in
L).

However, for totally different reasons, neither mathematicians outside
logic, nor set theorists embrace the axiom of constructibility (V = L).
Here are some reasons, which are instructive:

a. Mathematicians don't embrace V = L because they "get around" the
independence results in set theory that you refer to in another way: by
ignoring them. We will later discuss why and how they can choose to ignore
them.

b. Set theorists don't embrace V = L because of the informal idea that V =
L "limits" the set theoretic universe, contrary to the idea that the
"universe" is to contain "everything." Also, most of them have an agenda
that is incompatible with V = L; e.g., various large cardinals are
incompatible with V = L.

[Incidentally, this informal idea of "limits" has not really been
adequately formalized].

So in a way, V = L falls through the cracks. The core mathematicians
concentrate on very specific well behaved objects originating from basic
algebra and geometry, and like to ignore almost anything else - regarding it
as fluff. Fine if it helps with the "real stuff," but not if it causes
trouble.

Thus things like "every field has a unique algebraic closure" are marginal:
it's manageable, but involves a detour through more set theory than most
mathematicians are really comfortable with, and the fields they are
interested in are very particular.

From this point of view, the independence results in set theory that you
refer to create no real difficulties for them, and so they feel no need to
consider L or V = L or axiomatic set theory for that matter.

On the other hand, the set theorists want to consider absolutely
everything, and so reject V = L as needlessly restrictive.

>     The big step in Chapter 2 took place in 1963, when Cohen dis-
>covered the technique of forcing.   This was a very versatile
>method of constructing models of set theory in which a particular
>statement is true or false.   Cohen applied it to CH, thus showing
>the unprovability of CH from the axioms.   The method was developed
>and extended by many researchers; they showed the independence of
>virtually all the known unsolved problems as well as many new
>problems.   This continues today due to the efforts of Shelah and
>his followers. It is perhaps now drying up, not from a lack of
>ingenuity by the researchers but from a lack of new unsolved problems.

You say "known unsolved problems." There is no shortage whatsoever of
unsolved problems in mathematics. You clearly mean "unsolved problems in
set theory". And what you say is completely correct. Therefore there must
be a huge difference between unsolved problems in mathematics and unsolved
problems in set theory, in your sense.

In actuality, the important method of Cohen - like any important method -
has crucial limitations. We know already from Godel, and more from
Shoenfield, that this method - at least on its own - cannot yield the
independence of certain kinds of mathematical statements from the commonly
accepted axioms for mathematics (ZFC).

In particular, it cannot yield the independence of any statement -
mathematical or not - which is pi-1-2 (Shoenfield, 1961). One can go a bit
further:

THEOREM. Let A be a pi-1-3 sentence and B be a sigma-1-3 sentence. Suppose
ZFC proves A if and only if B. Let M,N be set models of ZFC with the same
ordinals. Then A holds in M if and only if B holds in N. In fact, we only
need the same omega_1.

Proof: Let A,B,M,N be as given. Since M and N satisfy ZFC, each satisfies A
if and only if B. Suppose the conclusion fails; swapping M and N if
necessary, we may assume that A and B both hold in M and both fail in N.
Now L(M) and L(N) both satisfy ZFC, hence each satisfies A if and only if
B. Since A is pi-1-3 and holds in M, by Shoenfield absoluteness A holds in
L(M), and hence so does B. If B held in L(N), then, since B is sigma-1-3,
by Shoenfield absoluteness B would hold in N, which is a contradiction.
Hence B fails in L(N). We now have a contradiction, since L(M) = L(N).

To see that the weaker hypothesis suffices, note that by Shoenfield's
constructions, every pi-1-3 sentence can be rewritten as a fixed first
order statement about L(alpha), where alpha is any uncountable ordinal, and
this is provable in ZFC. Hence we can use alpha = the common omega_1 of M
and N. QED

This Theorem is called "absoluteness of provably delta-1-3."
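For readers outside logic, here (in standard notation; the glosses are
mine, not part of Shoenfield's or the original posting) are the two
one-directional consequences of Shoenfield absoluteness that the proof
leans on, with x ranging over reals (subsets of omega):

```latex
% Shoenfield absoluteness (1961): \Pi^1_2 and \Sigma^1_2 sentences are
% absolute between a model of ZF and its constructible inner model.
% The two consequences one level up, used in the proof:
\begin{align*}
A \in \Pi^1_3,\ A = \forall x\,\varphi(x),\ \varphi \in \Sigma^1_2
  :\quad & M \models A \ \Longrightarrow\ L^M \models A
  && \text{(each $x \in L^M$ lies in $M$, and $\varphi$ is absolute)}\\
B \in \Sigma^1_3,\ B = \exists x\,\psi(x),\ \psi \in \Pi^1_2
  :\quad & L^M \models B \ \Longrightarrow\ M \models B
  && \text{(a witness in $L^M$ is a witness in $M$, by absoluteness of $\psi$)}
\end{align*}
```

So pi-1-3 sentences pass down from a model to its L, and sigma-1-3
sentences pass up; the proof plays these two directions against each other.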

But the crucial question is this: how much of mathematics resides in class
pi-1-2 or provably delta-1-3?

And the answer is:

i) nearly everything in mathematics;
ii) nearly nothing in classical set theory.

Note the sharp contrast. For instance, Smale recently wrote a piece on the
"main problems of mathematics into the 21st century" for the Math.
Intelligencer. Every mathematical statement in that article is pi-1-2!!!
I.e., seen to be provably equivalent in (a very weak fragment of) ZFC to a
formally pi-1-2 sentence. In fact, most are much lower than pi-1-2 in this
sense.

The significance of this is that pi-1-2 and provably delta-1-3 sentences are
not subject to the methods of set theory used for establishing independence
results that you refer to, merely in virtue of their logical form!
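To make the logical forms explicit (standard notation; the glosses are
mine):

```latex
\begin{align*}
\Pi^1_2 :&\quad \forall X\,\exists Y\,\varphi(X,Y)
  && X,Y \subseteq \omega,\ \varphi \text{ arithmetic}\\
\Sigma^1_3 :&\quad \exists X\,\forall Y\,\exists Z\,\varphi(X,Y,Z)\\
\text{provably } \Delta^1_3 :&\quad \text{ZFC proves the sentence
  equivalent to both a } \Sigma^1_3 \text{ and a } \Pi^1_3 \text{ form}
\end{align*}
```

Purely arithmetic sentences (pi-0-1, pi-0-2, ...), which quantify only over
natural numbers, sit far below pi-1-2; for instance, the Riemann Hypothesis
is provably equivalent to a pi-0-1 sentence.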

>     A natural problem suggested by Chapter 2 is to find new axioms
>which solve some of these problems; this is the subject of Chapter 3.
>The problem was to find the new axioms.   There were not too many
>candidates; and the known ones did not help much.   For example,
>many were willing to at least consider large cardinal axioms; but
>the large cardinals known at the time did not solve many problems.

As Shoenfield correctly indicates, large cardinals were shown to solve many
more problems. However, to this day, the grandest old problem of set
theory, the continuum hypothesis (CH), is known to be untouchable by (the
usual) large cardinal axioms. Right now, the vast majority of set theorists
don't know what to say about CH. Woodin is pushing axioms that imply that
CH is false. Foreman is pushing axioms that imply that CH is true.

D.A. Martin has said that the longer the profoundly unsettled state of CH
goes on, the more hollow the unabashedly realist viewpoint on set theory
becomes. (Martin is on the FOM and can update his views on this matter if
he has the time).

>Things changed in 1960, when Hanf showed that the measurable cardinals,
>which had been studied earlier by Ulam and Tarski, were much larger
>than previously studied cardinals.   It turned out that the existence of a
>measurable cardinal has important consequences for set theory.   The
>first such consequence, due to Scott, was that there is a set not in L;
>many more followed.

This is the kind of thing that makes the axiom of constructibility anathema
to set theorists. Under that axiom, there aren't any of the higher large
cardinals such as measurable cardinals. But a typical mathematician would
want to avoid measurable cardinals altogether. There is no algebra or
geometry or number theory or finite combinatorics or partial differential
equations in them.

>    At this point, the theory of projective sets enters the scene.   The
>basic theory was developed by researchers in the twenties; they showed
>that sets at low positions in the projective hierarchy had certain
>regularity properties, such as being measurable.   Godel had shown,
>using L, that these results could not be extended in ZFC.   Solovay
>proved that if there is a measurable cardinal, these regularity
>properties extended to one higher level.   This suggested finding a
>large cardinal which would extend them throughout the hierarchy.
>With this in mind, Solovay defined supercompact cardinals; but he did
>not have much success with them at this time.

But for mathematicians, the lower something is in the projective hierarchy
the "better" it is. Finite, which is much lower than the low, is best of
all. Countable, which is still much lower than the low, is also really
good. However, even an arbitrary set of rational numbers, or an arbitrary
countable set of points in the plane, is a pretty mind bogglingly unruly
and unholy pathological mess to most mathematicians. Finite unions of
intervals in the reals - now you're talking.

And then there is the huge jump to Borel sets of real numbers. The eyes
begin to glaze over, but they won't completely get up and walk out -
because of their familiarity with the idea of sigma-algebras and the like.
People know that countable limit processes are rampant in classical
analysis.

Analytic sets (in the sense of descriptive set theory)? Well, for some,
maybe OK. But even this acceptability is questionable, as suggested by the
fact that people are not even worried about the potential conflict of
terminology with analytic functions, analytic sets in analytic geometry,
subanalytic sets, etcetera [i.e., analytic as in local power series
expansions or complex differentiation].

Here is what I regard as the outer fringes, where some analysts are willing
to listen. I don't recall the full history of this, but Harrington and
Steel (both on the FOM!) figure most prominently.

X = "For any two analytic sets of reals (in the sense of descriptive set
theory) which are not Borel sets of reals, there is a Borel permutation of
the reals which sends the first onto the second."

Y = "For any two analytic sets of reals whose intersection with any
nonempty open interval is not Borel, there is an increasing homeomorphism
of the reals which sends the first onto the second."

It was shown that

1. ZFC + there exists a measurable cardinal proves X and Y;
2. In ZFC, X (respectively Y) is provably equivalent to a technical large
cardinal type principle considerably weaker than measurable cardinals (for
all x contained in omega, x# exists);
3. ZFC + V = L proves that X and Y are false;
4. X and Y fail in Godel's model L of constructible sets.

This was already wrapped up, I think, in the 1970's.

The work you are talking about concerning the higher reaches of the
projective hierarchy is obviously quite natural as a subject internal to
set theory. But it moves above even analytic sets, which is already at the
outer fringes of contemporary mathematics.

>    The next progress came from quite a different direction.   Some
>Polish logicians had discovered the regularity property of deter-
>minacy, and shown that it implied most of the known regularity pro-
>perties.   This suggested the new axiom PD: all projective sets
>are determinate.   It had many interesting consequences and appeared
>to be consistent; but there was no other reason to accept it.

Well, there was definitely a lot of unabashed realist talk at the time
about how PD should be accepted because of its consequences. And that the
periodicity picture is so much more natural than the picture you get with V
= L, and so much better to have than no picture at all, etcetera. So people
were definitely talking about "axiom of projective determinacy," and
invoking a kind of Godelian realism.
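For non-logicians, the content of "determinacy" is easiest to see at the
finite level, where it is a classical theorem (essentially Zermelo): in a
two-player game of fixed finite length with no draws, one of the players
has a winning strategy, and backward induction finds it. PD asserts the
analogue for infinite games whose payoff set is projective, where nothing
like backward induction is available. A minimal sketch of the finite case
(the particular game and payoff below are toy examples of my own, not
anything from the discussion above):

```python
def winner(history, length, payoff):
    """Backward induction on a game of `length` binary moves, players
    alternating (player 0 moves first).  Player 0 wins a completed play
    iff payoff(play) is True.  Returns the player (0 or 1) who has a
    winning strategy from the position `history`."""
    if len(history) == length:
        return 0 if payoff(history) else 1
    mover = len(history) % 2  # whose turn it is
    outcomes = [winner(history + (move,), length, payoff) for move in (0, 1)]
    # The mover wins iff some available move leads to a win for him;
    # otherwise the opponent wins.  So SOME player always wins -- this is
    # determinacy, trivial here but deep for infinite games.
    return mover if mover in outcomes else 1 - mover

# Player 0 wins iff the completed play contains at least two 1's.
# Player 0 controls moves 0 and 2, and can simply play 1 at both.
print(winner((), 4, lambda play: sum(play) >= 2))  # -> 0
```

The point of the comparison is only that the trivial finite argument has no
analogue once the payoff set is an arbitrary projective set of infinite
plays.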

>     The next task was clearly to find the relation between large
>cardinal axioms and determinacy axioms.   The first result was due to
>Martin: if there is a measurable cardinal, then every pi-1-1 set is
>determinate.   Martin tried to use his methods to find a large car-
>dinal which implied the determinacy of other projective sets.   He
>obtained such a cardinal for pi-1-2 sets; but it was extremely large
>(much larger than supercompacts), and set-theorists were reluctant to
>use it.

>The big break came several years later.

"Reluctant to use it" is not quite how I would characterize it. Wasn't it
more a matter of disbelief that it had to be so high? The "big break" you
speak about showed that it didn't have to be so high, and paved the way for
an exact matchup, as you describe.

But of course, conceptually, I don't think anyone outside set theory can
tell the difference between, say, a supercompact cardinal and a rank into a
rank (about what Martin first used as you say), except that rank into a
rank is simpler to state. I don't see how to make any more or less sense of
these cardinals relative to each other by conceptual analysis.

>       In a paper by Foreman,
>Magidor, and Shelah, it was shown that some problems which had been
>thought to require such very large cardinals to solve could be solved
>by cardinals smaller than a supercompact.   This was taken up by Shelah
>and Woodin, who showed that some of these not-so-large cardinals
>implied regularity properties for the projective sets.

Incidentally, you can also make a case that "not-so-large cardinals implied
regularity properties for the projective sets" is already the coveted
result - or very much of it - rather than the Martin-Steel theorem. I leave
that judgment to you and others. After all, mathematicians are familiar
with the "regularity properties" you speak of established by Shelah and
Woodin - (putting aside the crucial issue of how projective sets strike
them); whereas they are definitely not familiar with the "regularity
property" of determinacy of which you speak.

But for the mathematician, already another issue of relevance appears here.
Aside from the fact that for mathematicians, going down from analytic sets
is much better than going up from analytic sets, there is the question of
the ultimate importance of deriving the "regularity conditions" of which
you speak. You are particularly referring to "being countable or having a
perfect subset," "being Lebesgue measurable," and "having the property of
Baire."

But it is the overwhelming practice of mathematicians to just add such
conditions to the hypotheses of theorems. Thus one discusses only
measurable sets, or only sets having the Baire property, etcetera, at the
outset.

Of course, if mathematicians frequently and naturally came across sets in
their theorems and proofs for which an essential step is to demonstrate the
measurability or Baire property, then this would look much more relevant
and important to them.

On the other hand, obviously within set theory, what you speak of is quite
attractive and appropriate. Besides, as far as I know, all known issues
regarding the higher projective hierarchy boil down to regularity
conditions in the broadest sense of this term.

In fact, I was moved to try to formulate PD itself as a more standard kind
of what you call "regularity condition" - or at least a standard looking
statement in descriptive set theory. I came up with the following in, I
think, the early 1970's:

"every symmetric projective set in the unit square contains or is disjoint
from the graph of a Borel function."

I proved this from PD, and level by level (particularly easy). I proved
that Borel determinacy is equivalent to this for Borel sets. I had some
partial results towards showing this equivalent to PD, and Kunen completed
the argument.

>     The next step was clear: show that some of these cardinals imply
>PD.   This was accomplished by Martin and Steel, who proved: If there
>are n Woodin cardinals and a measurable cardinal larger than all of
>them, then every pi-1-n+1 set is determinate.

>    To see that Martin-Steel really gives the connection between large
>cardinals and determinacy, it was necessary to show that the large
>cardinal hypotheses in Martin-Steel are minimal.

>For this, one needed a model like L which contains
>Woodin cardinals.

>The efforts of Mitchell, Martin and Steel constructed
>the required core models.   More recent work of Steel has extended our
>knowledge of core models; but there is still much work to do here.

>I think... that the basic ideas of Chapter 3 are now in place
>for the theory of sets of reals.   About sets of higher types, such as
>sets of sets of reals, we know almost nothing (although there are a
>couple of recent results on pi-2-1 sets).

Wrong direction! Down, not up. The future place of set theory in
mathematical thought depends crucially on going lower than the projective
hierarchy.

>We have no analogue of
>determinacy, and no idea of what large cardinals (if any) will be
>useful.   It would not surprise me if the solution to these problems
>became the subject of Chapter 4.

If you don't go down rather than up, then the danger is that nobody will
read Chapters 2 and 3, and even Chapter 1 may disappear through the
replacement of ZFC with much weaker systems that are less set theoretic.
E.g., ZC (Zermelo set theory with choice) and V(omega + omega), or various
things far more radical.

>     What impresses me in the whole story is how the solution of
>problems leads to new concepts, which are then developed and, after
>a time, integrated with the old concepts.

What impresses me in the whole story is how the apparent crucial
limitations of this whole line of investigation did not lead to a
rethinking of the area by set theorists, where at least the leaders of the
field emphasized the real problems that remain for the future of set
theory. Can it connect up with normal mathematics, or is there an intrinsic
barrier to this? And even a breakthrough in the development of more
convincing attacks on the continuum hypothesis is quite unlikely to address
these real problems.

As neat as the developments you so nicely outline are, one has the
following problem.

1. Appropriate large cardinal axioms are shown to imply PD, which in turn
answers just about any question of a set theoretic nature about projective
sets.
2. And those large cardinal axioms are, in an appropriate sense, required
to get PD.
3. However, those large cardinal axioms are not required to settle PD. In
fact, the axiom of constructibility already also settles PD - negatively.
4. So a mathematician concerned with foundational issues in the sense
implicit in this posting can still choose between, say,
	i) ignoring projective sets (beyond Borel sets, and especially
beyond analytic sets) entirely;
	ii) accepting these large cardinals and thereby gaining PD;
	iii) accepting the axiom of constructibility, thereby gaining notPD;
	iv) lapse into a kind of formalism, where whatever implies whatever
is "OK by me."

Of these, most would find i) the most attractive. Getting them truly
involved in large cardinals and the philosophical, foundational, and
mathematical issues surrounding them has to be done with more finesse.

Of course, there can be no doubt that D.A. Martin and John Steel are
thoughtful and impressive people. For example, they are both on the FOM!




