[FOM] role of large cardinals

Harvey Friedman hmflogic at gmail.com
Thu Sep 22 01:11:32 EDT 2016

On Wed, Sep 21, 2016 at 10:08 PM,  <meskew at math.uci.edu> wrote:
> I recently wrote the following paragraph-fragment.  I would appreciate any
> critiques of the assertions, especially if you disagree with the last
> thing starting with "the fact that..."
>
> In contemporary logic, there is a wide-ranging consensus that the
> traditional large cardinal axioms are the appropriate measuring-stick for
> gauging the logical strength and showing the consistency of any
> mathematical statement.  The main reasons for this are their mutual
> compatibility, their success in the role so far, and the fact that there
> is no known example of a possibly-consistent hypothesis whose strength can
> be shown to transcend the large cardinal notions.
>
For mathematical statements provable in ZFC, large cardinal hypotheses
(I used that rather than "axioms") are too powerful to "gauge the
logical strength".

The most common gauge for strong statements provable in ZFC is,
roughly, the length of the cumulative hierarchy used. I.e., the
(possibly transfinite) number of iterations of the power set operation
used.
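Spelling this out in the standard notation (not specific to this post), the cumulative hierarchy is the transfinite iteration of the power set operation:

```latex
V_0 = \emptyset, \qquad
V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \text{ for limit } \lambda .
```

The gauge for a theorem is then, roughly, the least alpha such that the theorem is provable using only the sets in V_alpha.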

The first omega levels constitute the finite part of the cumulative
hierarchy. The first omega + 1 levels of the cumulative hierarchy
correspond to the so-called Z_2 = second order arithmetic (as a
two-sorted first order theory).

The highly developed gauge for moderately strong theorems of Z_2 is
Reverse Mathematics, i.e., the calibration of theorems of Z_2 not
provable in RCA_0, my base theory for RM. See my
#1; also Simpson's SOSOA book, and many accounts on the Internet.
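As a reminder for readers (standard in the RM literature, not stated in the post), the benchmark subsystems of Z_2 against which theorems are calibrated are the "Big Five", in strictly increasing strength:

```latex
\mathrm{RCA}_0 \;\subsetneq\; \mathrm{WKL}_0 \;\subsetneq\; \mathrm{ACA}_0
\;\subsetneq\; \mathrm{ATR}_0 \;\subsetneq\; \Pi^1_1\text{-}\mathrm{CA}_0 .
```

A great many ordinary mathematical theorems turn out to be provably equivalent, over RCA_0, to one of the four stronger systems.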

Within RCA_0 and with infinite objects, the proper gauges are somewhat
unsettled, but there is a kind of mathematical statement here worth
mentioning. These are Pi11 statements such as Kruskal's Theorem and
the Graph Minor Theorem. There the fruitful measure of logical
strength is a proof-theoretic ordinal - although there are usually
good formal system measures as well.
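To make "measure by a proof-theoretic ordinal" concrete, here is the standard picture for these two examples (the ordinal notations are the usual ones from the literature, not from this post, and exact assignments depend on the presentation):

```latex
|\mathrm{ATR}_0| \;=\; \Gamma_0
\;<\; \vartheta(\Omega^\omega)
\;\le\; \text{ordinal strength of the Graph Minor Theorem}.
```

Kruskal's Theorem implies the well-orderedness of (a notation system for) the small Veblen ordinal ϑ(Ω^ω), and hence is unprovable in ATR_0; the Graph Minor Theorem lies strictly higher still.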

For finite mathematical statements, the proper context is usually PA
and its fragments. Equivalently, one can use ZF with Infinity replaced
by "not Infinity" and its fragments.
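The correspondence between PA and ZF with Infinity negated runs through Ackermann's interpretation: membership among hereditarily finite sets becomes a bit test on natural numbers. A minimal runnable sketch of just the coding (function names are mine; this illustrates the coding only, not the full interpretation, and strictly speaking bi-interpretability with PA requires the set theory to include a transitive closure principle):

```python
def ack_encode(s):
    """Ackermann code of a hereditarily finite set, given as a
    frozenset of already-coded-able frozensets: the code of s has
    bit n set iff the set with code n is a member of s."""
    return sum(2 ** ack_encode(x) for x in s)

def ack_decode(n):
    """Inverse: the hereditarily finite set whose Ackermann code is n."""
    return frozenset(ack_decode(i)
                     for i in range(n.bit_length())
                     if (n >> i) & 1)

# The first few von Neumann ordinals as hereditarily finite sets.
zero = frozenset()            # {} has code 0
one = frozenset({zero})       # {0} has code 2^0 = 1
two = frozenset({zero, one})  # {0, 1} has code 2^0 + 2^1 = 3

# Membership in the coded world is a bit test:
# m is a member of the set coded by n  iff  (n >> ack_encode(m)) & 1.
```

Under this coding the epsilon relation of the finite set theory is arithmetically definable, which is the engine behind the mutual interpretation Friedman alludes to.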

There are some choice-violating versions of embedding hypotheses that
transcend the apparently-choice-compatible embedding (large cardinal)
hypotheses. So the conventional set theorist would not consider, e.g.,
j:V into V over NBG to be a "large cardinal hypothesis" since it is
incompatible with AxC. And it is stronger than, e.g., j:V(kappa + 1)
into V(kappa + 1). These choice-violating versions are considered to
be "possibly consistent".

NOW all of the above refers to the normal situations. By this I mean
that the theorem in question is generally proved with the natural
axioms that surround the language in which the theorem is expressed.

We now have examples in the language of Z_2 where the proof
necessarily uses iterated power sets and even large cardinal
hypotheses. Even more abnormally, we have now increasingly compelling
examples of implicitly and explicitly Pi01 sentences where the proof
necessarily uses large cardinals. This is not (yet) the normal
situation, and the normal situation is so rich from the point of view
of analyzing logical strength, that it is very much being productively
pursued by a number of people. Of course, there is the (my)
expectation that the necessary use of large cardinals for compelling
discrete and continuous mathematical investigations will become
normal. E.g., the Continuation/Emulation idea might prove uniformly
productive and illuminating when properly applied in a huge range of
compelling contexts.