[FOM] Pluralistic Foundational Crisis?/set theory

Harvey Friedman hmflogic at gmail.com
Sat Apr 2 02:28:21 EDT 2016


There is a Symposium on Set Theoretic Pluralism announced on the FOM list:
http://www.cs.nyu.edu/pipermail/fom/2016-April/019650.html

The in-depth scientific announcement is rather detailed and is at
https://sites.google.com/site/pluralset/context-aims-of-the-network

I give my own perspective on this by copying the body of
https://sites.google.com/site/pluralset/context-aims-of-the-network
and interleaving comments.

I would hope that this will generate some in-depth discussion.

********************

> Set theory is in the throes of a foundational crisis, the results of
> which may radically alter our understanding of the infinite and
> mathematics as a whole.

It is doubtful that the specific "pluralistic crisis" being discussed
will alter our understanding of the kind of mathematics that has
dominated mathematical practice for decades and is expected to
dominate mathematical practice for the foreseeable future. Working
mathematicians have a very limited connection to set theory, using
just enough set theory to provide a convenient underpinning for their
real mathematical interests. When the set theory takes on a life of
its own and ceases to be convenient for their intended mathematical
purposes, they seek to avoid it. We obviously cannot speak for
everybody, but only about the overwhelmingly dominant modus operandi.

Nevertheless, if one adopts the viewpoint that the most general, or at
least very general, concepts of set are of intrinsic importance and
worth investigating in their own right, then one can come to be
concerned with this "pluralistic crisis" even if it is completely
divorced from and independent of mathematical practice.

In fact, there is a different kind of "foundational conundrum" in set
theory which does promise to have a substantial and potentially
enormous impact on mathematics as a whole. I don't think that those
who talk about Set Theoretic Pluralism here and elsewhere have this
different kind of "foundational conundrum" in mind, although one could
merge the two under some wider banner. I will say a bit more about
this later in this posting.

> Over the course of the twentieth century, set theory has become the de
> facto foundation for mathematics. It plays this role in two ways.
> First, it provides an ontology of spaces and objects which can
> represent the subject matter of contemporary mathematics. Second, it
> provides a lever via which the problems of contemporary mathematics
> may be solved. On the first front, set theory has been a success.

This is not controversial.

> On
> the second, however, significant problems have emerged. The most
> dramatic example of this is the continuum hypothesis (CH). While the
> large cardinal programme initially appeared to promise a means of
> solving these kinds of problems, it is now well-known that CH is
> independent of anything we could foreseeably think of as a large
> cardinal assumption.

It would be interesting to see a suitably general and transparent
notion of "large cardinal axiom" for which we can establish that no
such axiom can prove or refute CH over ZFC.
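
For background: the standard technical evidence for this independence
is the Levy-Solovay phenomenon. The following is well-known material,
recorded in LaTeX for the record; it is not something claimed in the
announcement or above.

% Levy-Solovay: small forcing preserves the standard large cardinals,
% while CH can be changed by small forcing.
\textbf{Theorem (Levy-Solovay).} If $\kappa$ is a measurable cardinal and
$\mathbb{P}$ is a forcing notion with $|\mathbb{P}| < \kappa$, then $\kappa$
remains measurable in every generic extension $V[G]$ by $\mathbb{P}$;
similar preservation holds for the other standard large cardinals.
% Since CH can be forced to hold (collapse $2^{\aleph_0}$ to $\aleph_1$) or
% to fail (add $\aleph_2$ Cohen reals) by forcings of size $< \kappa$, no
% such large cardinal axiom decides CH over ZFC.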

But CH is in the family of statements whose abstract set theoretic
intensity is way higher than what the working mathematician is focused
on or feels is really needed. See
http://www.cs.nyu.edu/pipermail/fom/2016-March/019584.html
So even if the large cardinal hypotheses were to settle CH, this would
not have been a compelling reason for the mathematical community to
enlarge the usual ZFC foundations with large cardinal hypotheses.

> In the last few years and in response to these epistemic challenges a
> number of new perspectives on set theory have emerged which attempt to
> engage with these problems by avoiding the fixed ontology of the
> cumulative hierarchy and replacing it with a plurality of universes.
> For multiverse approaches, a problem like CH is treated as a misleading
> way of asking which universe we happen to be working in.

However, it should be noted that this does not seem to apply to
problems that, unlike CH, lie within the realm of concrete
mathematics, of the kind that mathematicians are focused on. In
particular, it does not apply to sentences of arithmetic or to
sentences at low levels of the analytic hierarchy.
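
The standard fact underlying this point is Shoenfield absoluteness;
again this is well-known background recorded in LaTeX, not part of the
announcement.

% Shoenfield absoluteness: low-level statements do not vary across the
% usual multiverse moves (passage to L, forcing).
\textbf{Theorem (Shoenfield).} Every $\Sigma^1_2$ (hence every $\Pi^1_2$,
and in particular every arithmetic) sentence is absolute between $V$ and
$L$, and between $V$ and any forcing extension $V[G]$.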

> For example,
> Joel Hamkins has proposed that set theory should be construed in
> better faith with its practice.

Perhaps foundations of set theory should be construed in better faith
with mathematical practice?

> In accord with contemporary set
> theory's fascination with models, Hamkins suggests that the models
> themselves should be added to its ontology (Hamkins, 2012).

This reaction to different models satisfying different major set
theoretic statements is to accept and study the different models,
while generally withholding value judgments as to the appropriateness
of these models.

But there is another reaction to different models satisfying different
major set theoretic statements. This is to focus on one particular
model. A variant would be to focus on a limited group of models.

We can go around in circles and say that we should pick (V,epsilon) as
the one particular model. However, the current view based on
experience is that this model is underdetermined. Of course, the
diehards will in fact take the position that there is exactly one
(V,epsilon) by definition. And therefore there is only one truth value
of CH in that model. We just haven't yet figured out what it is.

Of course, the pluralistic view is that (V,epsilon) is in fact
underdetermined, and is not really a single model. So let's take this
view.

An obvious move is to focus on one particular model, and if we are
going to focus on one particular model, the most obvious focus would
be on (L,epsilon).

Of course, (L,epsilon) has been a particularly unpopular model,
especially among those inclined to think that there is only one
(V,epsilon). Nevertheless, it has some tremendous advantages.

First and foremost, almost every single one of the known natural set
theoretically intense statements about V(omega + omega), and many
others, has a known truth value in (L,epsilon).
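
A few well-known samples, recorded in LaTeX only to illustrate this
claim (these are standard facts, not claims of the announcement):

% Standard consequences of V = L.
\begin{itemize}
  \item $\mathrm{ZF} + V{=}L \vdash \mathrm{AC}$ and $\mathrm{GCH}$, hence $\mathrm{CH}$.
  \item $\mathrm{ZF} + V{=}L \vdash \Diamond$, hence a Suslin tree exists and
        Suslin's Hypothesis fails.
  \item In $L$ there is a $\Sigma^1_2$ well-ordering of the reals, so definable
        pathologies (e.g.\ a non-Lebesgue-measurable set) already occur at the
        $\Delta^1_2$ level.
\end{itemize}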

So on purely mathematical set theory grounds, (L,epsilon) solves all
of the issues, period. So why is (L,epsilon) such an unpopular choice
of model to focus on?

The usual reason given is that in (L,epsilon) there are no measurable
cardinals (nor even somewhat weaker large cardinals). Bringing this
way down in abstraction level, to the relatively concrete, though
still far above the overwhelming focus of mathematical practice:
(L,epsilon) satisfies the negation of some "good" assertions. Most
notable, perhaps, is "any two analytic non Borel sets of reals are
Borel isomorphic". This pleasing assertion is known to be false in
(L,epsilon), but is provable using the existence of a measurable
cardinal.
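
The precise statement behind "no measurable cardinals in (L,epsilon)"
is Scott's theorem; this is standard background, added here for the
record.

% Scott's theorem.
\textbf{Theorem (Scott).} If there is a measurable cardinal, then $V \neq L$.
% Relativized to $L$: it is a theorem of ZFC that
% $L \models$ "there is no measurable cardinal", whatever holds in $V$.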

I'm not convinced of the strength of this argument rejecting a focus
on (L,epsilon). It has too much of the flavor of "gee, I'll lose all
this beautiful set theory that I've grown up with".

Another argument in favor of focusing on (L,epsilon) is that it is the
only "tangible" model containing all ordinals. In fact, it is the
minimum model containing all ordinals, in the appropriate sense.
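
The appropriate sense is the following standard fact (well-known
background, recorded in LaTeX):

% Minimality of L among inner models.
\textbf{Fact.} If $M$ is any transitive class model of $\mathrm{ZF}$
containing all ordinals, then $L^M = L$, and hence $L \subseteq M$; so $L$
is the least such model.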

It is pretty clear that no forcing extension of (L,epsilon) is going
to be "tangible" in any reasonable sense - that is, even if there are
any such extensions; if we are in (L,epsilon), then there won't be
any. Some interesting technical issues arise here in making clear
sense of this (I think mostly resolved), but the main point is solid:
there are no tangible forcing extensions of (L,epsilon). One can then
only look to studying families of models, abandoning the very idea of
focusing on one model.

Granted, there is a level of tangibility in models like
(L(mu),epsilon), where mu is a kappa-additive measure on a measurable
cardinal kappa. But there is still the question of where that kappa
and that measure come from. How do we determine whether a subset of
kappa is to have measure 0 or measure 1? It is my impression that the
attempts to deal with this issue are not satisfactory. You seem to
have to say that some involved process miraculously works to produce
this.

We can also go further and consider the notion of ordinal as
underdetermined, just as we have considered the notion of V as
underdetermined. Then we arrive at the so-called minimum transitive
model, which is a countable (L(lambda),epsilon). Enough of this
discussion...
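
(For the record: the "minimum transitive model" referred to here is
the following standard object; this is well-known background, not part
of the original discussion.)

% The minimal transitive model of ZFC.
\textbf{Fact.} If there is a transitive set model of $\mathrm{ZFC}$, then
there is a least ordinal $\lambda$ with $L_\lambda \models \mathrm{ZFC}$;
the structure $(L_\lambda,\in)$ is countable and is included in every
transitive model of $\mathrm{ZFC}$.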

> John Steel
> takes the impressive impact of the large cardinal programme on
> descriptive set theory and turns our ordinary understanding of sets on
> its head. Rather than thinking of set theory as describing some
> pre-existing structure in which mathematics can be seen to take place,
> we should rather see it as a congenial scaffolding through which
> further concrete mathematics can be interpreted (Steel, 2012).

This use of the word "concrete" must be distinguished from normal
mathematical usage. I would say that the Borel measurable world (in
Polish spaces) is at the outer limits of anything that would even
remotely be regarded as concrete from the point of view of
mathematical practice. But I need to take a careful look at Steel,
2012 to further address Steel's viewpoint.

Let me stop here in this posting, with the intention of continuing in
this manner through the rest of it, copied below:

> Finally, Friedman’s hyperuniverse programme attempts to combine
> features of both the universe and multiverse perspectives. By tracking
> first order properties of universes in multiverses constrained by
> natural principles, Friedman aims to discover new axiom candidates to
> characterise the universe of sets V. Väänänen uses his dependence
> logic, in particular the concept of team semantics, to make sense of
> the multiverse idea. His starting point is general first order logic
> with multiverse structures and he applies this to set theory.
>
> Each of these pictures admits a kind of pluralistic ontology and
> indeterminacy into foundations. The move is controversial. Hugh Woodin
> has argued that the kind of generic multiverse offered by Steel
> reduces set theory to a species of formalism that betrays its
> Cantorian roots (Woodin, 2012). Moreover, Tony Martin has offered a
> naïve re-working of Zermelo's categoricity argument to claim that the
> indeterminacy revealed by CH is of a merely epistemic nature and thus,
> that the metaphysical re-imaginings of Hamkins and Steel are
> unwarranted (Martin, 2001; Zermelo, 1976). In a related vein, a
> criticism of the pluralist account of foundations is given by Väänänen
> in his comparison of the second order logic and set theory approaches
> (Väänänen, 2012).
>
> Beyond the mathematical challenges involved in addressing these
> programmes, there are significant overlaps with recent work in
> mainstream analytic philosophy, particularly in metaphysics and
> philosophical logic. A key problem in metaontology is Putnam’s
> paradox, which is a generalisation of Skolem’s paradox to language and
> semantics at large. Using model theoretic techniques, it is argued
> that we are caught in a regress of theory augmentation whenever we
> seek to give a full account of the meaning of our expressions. Without
> such an account, we lose the ability to anchor our ontology to our
> language. A response emerges with Lewis and has been developed by
> Sider, Schaffer and Williams. They argue that there is a privileged
> language which carves nature at its joints and that this is the goal of
> our best theories. For multiverse debates, these approaches are
> particularly useful for the one-universe adherent. Related work by
> Kennedy (2013) suggests a pluralistic approach involving generalised
> constructibility and more widely the concept of "formalism freeness",
> and its dual, the concept of the entanglement of a semantically given
> object with its underlying formalism. On the other hand, there has
> also been recent work into the identification of substantive debates.
> Stemming from Carnap (1956) and Ryle (1954), and emerging more
> recently with Thomason (2009), Chalmers (2011) and Sider (2011), it is
> argued that some metaphysical debates are merely verbal. Such debates
> are pointless as, although the parties to the debate are in conflict,
> nothing substantive hangs on the result. With multiverse debates,
> these approaches provide a means of arguing that some questions are
> meaningless.
>
> With regard to philosophical logic, a significant amount of recent
> activity has been devoted to problems of indeterminacy; in particular,
> problems caused by vagueness and the liar paradox. A prominent
> response to these problems is known as supervaluation. Observing that
> indeterminacy results where there are different possibilities none of
> which is determined as correct, supervaluation tells us that the
> determinate propositions are those which are true regardless of which
> possibility we select. In the context of the multiverse, a proposition
> is meaningful if it is true in every universe. This, however, is just
> one of many different approaches to indeterminacy which include
> epistemicism, fuzzy logic, non-standard consequence relations and
> paraconsistency (Williamson 2008). It has been observed that any
> approach to indeterminacy developed in one area can be generalised
> into an analogous response in another. This raises interesting
> questions about the applicability of a wider variety of techniques in
> philosophical logic to the multiverse.

