Universes of definable sets and the axiom of choice

Dmytro Taranovsky dmytro at mit.edu
Mon Mar 16 16:05:08 EDT 2020


The main argument against the axiom of choice (AC) is and has been that 
it is nonconstructive, and that it leads to the existence of 'pathological' 
sets that are apparently not definable.  Still, AC has been accepted 
because of its obviousness and its enormous utility in 'making' the set 
universe tractable and well-behaved. There is another argument for AC:  
A mathematician insisting that all sets be definable will end up with a 
model of AC (such as an elementary substructure of HOD) -- or that has 
been the traditional view.

HOD is only one of several notions of the universe of definable sets, 
and intriguingly, some other notions suggest ZF+AD.

I will start with a motivating example; next, make general observations 
on definability; then present a potential construction of natural 
definable universes with large cardinals beyond choice; and finally 
discuss ordinal definability and beyond.

For a motivating example, suppose that L(R) satisfies AD (the axiom of 
determinacy), and R^# exists.  Let r_0 = 0 and r_{j+1} = 
M_omega^#(r_j).  Then the definable sets in (L(R),in,r_1,r_2,...) form 
an elementary substructure of L(R).  Or if we let M = L(R)[G] with G 
generic over L(R) for Add(omega_1,1), then M satisfies ZFC, and the 
transitive extensional collapse of the set of (M,in,r_1,r_2,...) 
definable sets is elementarily embeddable in L(R).  (The transitive 
extensional collapse can be defined as collapse(a) = collapse_a(a) and 
collapse_a(b) = {collapse_a(c): c in (a intersect b)}.)
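
As a toy illustration only (sets here are coded as Python frozensets, 
whereas the construction above concerns definable sets in L(R)[G]), the 
transitive extensional collapse can be computed as follows:

    # Transitive extensional collapse, following the definition above:
    # collapse(a) = collapse_a(a), where
    # collapse_a(b) = {collapse_a(c) : c in a and c in b}.
    def collapse_within(a, b):
        return frozenset(collapse_within(a, c) for c in b if c in a)

    def collapse(a):
        return collapse_within(a, a)

    # Example: with e = {} and s = {e}, the only element of {s} has no
    # members inside {s}, so it collapses to the empty set.
    e = frozenset()
    s = frozenset({e})
    print(collapse(frozenset({s})) == frozenset({frozenset()}))   # True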

Up to notational differences, r_{j+1} = theory(L(R),in,r_j,i_0,i_1,...) 
where ordinals i_0,i_1,... are the first omega canonical indiscernibles 
for L(R).  By contemplating L(R), we might see r_1,r_2,... as natural 
additions to the language (L(R),in).  Alternatively, for the above, we 
can let r be any sequence of reals of length omega with M_omega^#(r_j) 
computable from r_{j+1}.

Alternatively, and even more simply (if we are content with finite 
fragments of ZF/ZFC), for every statement phi that holds in some 
L_alpha(R), I think we can set r_j=0 and, in place of L(R), use 
L_alpha(R) for the least alpha>0 such that L_alpha(R) satisfies phi.  
For such alpha (assuming AD in L_{alpha+2}(R)), the elements of 
L_alpha(R) that are first order definable in L_{alpha+1}(R) should form 
an elementary substructure of L_alpha(R).  
(alpha+1 is sometimes necessary; for example, see "Scales in K(R)" by 
John Steel.)

Going further, while our understanding of uncountable sets is limited, 
we can still make informed guesses about definability (or for a 
formalist, guesses about how we will want it to be) and try to uncover 
the right principles.  Some of my previous writings can be found at:
"Definability and Natural Sets of Real Numbers" 
https://cs.nyu.edu/pipermail/fom/2012-February/016198.html
"On the Nature of Reals" 
https://cs.nyu.edu/pipermail/fom/2016-October/020147.html
"Symmetry and Infinity" 
https://cs.nyu.edu/pipermail/fom/2018-December/021326.html

The existence of a reasonable structure for definable sets depends on a 
sufficiently closed notion of definability.  Closure/symmetry, and not 
merely strength, is crucial.  For example, it is because of closure and 
symmetry that we can quantify over the totality of definable reals, and 
still end up with definable reals.

Next, definability is extensible.  Suppose that we extend definability 
from L_0 to L_1 (for some L_0 and L_1).  If (per the standard platonic 
view) there is a background universe, then the set of L_0 definable 
sets should be (assuming sufficient closure) an elementary substructure 
(for definability levels in L_0) of the set of L_1 definable sets (or 
for a slight weakening of sufficient closure, the elementarity is for a 
definability level slightly below L_0).  However, even without assuming 
the metaphysical existence of uncountable sets, we can make sense of 
concepts like the set of all reals as follows.  With some caveats, a 
definability level leads to a universe of definable sets, say V_0 for 
L_0 and V_1 for L_1. Assuming sufficient coherence, L_0 makes sense both 
for V_0 and V_1, and by taking L_0 definitions of sets in V_0 and 
applying them in V_1, we get an L_0-elementary embedding of V_0 into 
V_1.  Thus, while the set of all reals may have additional elements in 
V_1, we have full agreement about its L_0-properties, and similarly for 
other sets.  A related discussion is in my above-mentioned posting 
"Symmetry and Infinity".

If V=HOD, then at sufficient expressiveness, the resulting structures 
satisfy AC and are elementarily embeddable into (V,in).

However, there are good reasons to reject V=HOD.  Intuitively, there is 
no definable way to well-order the reals.  Moreover, at definability 
levels that are well understood, a relation whose restriction to reals 
of definability level X (for typical X) is a well-ordering requires a 
definability level close to X (and models with simple well-orderings, 
such as L, have simple reals).  For example, under projective 
determinacy (PD), there is no Sigma^1_9 well-ordering of the Delta^1_10 
reals.  Also, we can force V=HOD by, for example, coding V into the 
continuum function, but doing so would break the symmetry we intuitively 
expect of V.  Furthermore, some large cardinal-like symmetry principles 
contradict V=HOD.

Without a definable well-ordering, the set of definable sets might fail 
to be an elementary substructure.  However, we can still take, for 
example, the transitive extensional collapse S_kappa of the sets first 
order definable in V_kappa.  This maps definable sets without definable 
elements to the empty set, and so on (a toy illustration appears after 
the list below).  We do not know much about S_kappa, but one optimistic 
possibility is that for appropriate kappa:
- S_kappa satisfies a large fragment of ZF, including the full 
separation schema.
- All elements of S_kappa are definable in S_kappa.
- We have coherence between different kappa in terms of (say) 
S_{kappa_1} being elementarily embeddable into a rank-initial segment of 
S_{kappa_2}.
- AD holds in S_kappa.
- For some kappa, S_kappa has large cardinals beyond choice such as 
totally Reinhardt cardinals (as they are currently called; note that 
existence of a totally Reinhardt cardinal is a Sigma_2 statement).
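
As a toy illustration of the idea behind S_kappa (under the drastic 
simplification of a finite membership structure, where parameter-free 
definability coincides with being fixed by every automorphism; the names 
below are just for the example):

    from itertools import permutations

    def automorphisms(domain, member):
        # All permutations of the domain preserving the membership relation.
        dom = list(domain)
        for perm in permutations(dom):
            pi = dict(zip(dom, perm))
            if all(member(x, y) == member(pi[x], pi[y])
                   for x in dom for y in dom):
                yield pi

    def definable_elements(domain, member):
        # In a finite structure, an element is definable without parameters
        # exactly when every automorphism fixes it.
        autos = list(automorphisms(domain, member))
        return {x for x in domain if all(pi[x] == x for pi in autos)}

    def collapse(a, member):
        # Transitive extensional collapse (same recursion as the earlier
        # sketch); assumes the membership relation is well-founded.
        def coll(b):
            return frozenset(coll(c) for c in a if member(c, b))
        return frozenset(coll(b) for b in a)

    # Toy universe: e = {}, a = {e}, b = {e}, c = {a, b}.  Swapping a and b
    # is an automorphism, so only e and c are definable, and since c has no
    # definable members, both e and c collapse to the empty set.
    sets = {'e': set(), 'a': {'e'}, 'b': {'e'}, 'c': {'a', 'b'}}
    member = lambda x, y: x in sets[y]
    d = definable_elements(set(sets), member)
    print(sorted(d), collapse(d, member))
    # ['c', 'e'] frozenset({frozenset()})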

Notes:
- It could be that transitive extensional collapse is the wrong way of 
extracting good definable structure.  Even bounded quantifier separation 
is not a given (i.e., there are likely models in which it fails in 
S_kappa).  It might also be that quantification over some levels of 
S_kappa has different expressiveness than quantification over the 
corresponding levels of V_kappa.
- AC should fail in S_kappa if adding more unbounded quantifiers 
increases expressiveness beyond the addition of ordinal parameters.
- Without AC, the intuitive case for AD is very strong.
- Without AC, there are no apparent problems with Reinhardt-like cardinals.

However, if V_kappa satisfies ZFC, then so does S_kappa.  Also, for all 
large enough kappa, S_kappa satisfies "there is a well-ordering of the 
reals".  (On the other hand, it should be consistent that for the least 
kappa such that V_kappa has the same Sigma_10 theory as V, S_kappa 
satisfies "there is no well-ordering of the reals".)

The problem, I submit, is that ordinal definability, while very 
expressive, is not sufficiently closed; it allows defining certain 
constructs without also defining the canonical indiscernibles for those 
constructs.

A key result of descriptive set theory is uniformization.  For 
well-understood expressiveness levels, every relation on the reals can 
be uniformized by a function that is definable at the same or 'slightly' 
higher level.  For example, under PD, we have Pi^1_{2n-1}(r) and 
Sigma^1_{2n}(r) uniformization for all positive integers n and reals r.  
However, with ordinal definability (unless V=HOD), we get a nonempty 
Pi^V_2 set of reals (one that is co-countable if |R^{HOD}|=omega) without 
an ordinal definable element.  In the analogous situation for L(R), we 
can get definable elements by using R^#.  
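
For reference, the standard definition of uniformization used above: a 
function f uniformizes a relation R on the reals iff

    \forall x\, (\exists y\, R(x,y) \rightarrow R(x, f(x))),

so, for instance, Pi^1_{2n-1} uniformization says that every Pi^1_{2n-1} 
relation has a Pi^1_{2n-1} uniformizing function.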

Similarly, for HOD, we can extend the language of set theory with the canonical 
indiscernibles for V (which I call omega-reflective cardinals) and 
(likely) define HOD sharp to get definable sets beyond OD.  For details, 
see my paper: "Reflective Cardinals" https://arxiv.org/abs/1203.2270 .

To get sufficient closure relative to itself, the extension would have 
to be done omega times -- thus extending (V,in) with omega new predicate 
symbols -- and if our assumptions are right, the transitive extensional 
collapse of definable sets may be a canonical model for ZF+AD that 
coheres with the appropriate S_kappa described above.  I expect that the 
appropriate axioms (leading to a true, reasonably complete theory) will 
be amenable to fine-structural analysis, but that is for another day.  
Fine-structural models for, say, totally Reinhardt cardinals would be 
very different from what we have now (and very interesting), but at a 
high level, they would still be well-understood canonical models that 
appear to be built level by level.

I will close with some (non-exhaustive) possibilities:
- (pessimistic view) set theory is incoherent even at the level of third 
order arithmetic, with basic independent propositions having no 
preferred resolutions.
- (not likely) V=HOD.
- (radical view) the analysis of definable sets (in the above style) 
works out, with the sets satisfying ZF+AD, and this universe (or 
coherent collection of universes) becomes the main subject of infinitary 
mathematics.
- the universe of all sets V (satisfying ZFC) and the canonical 
universes for ZF+AD are both considered important.

Even with V satisfying ZFC but not V=HOD, an approach to V through a 
coherent sequence of universes V_0, V_1, ... is possible, but with some 
complications.  The canonical definitions for sets in V_0 go slightly 
beyond first order definability in V_0 (unless we choose to limit 
correctness/elementarity; in a way, V_0 is defined as a whole), and 
applying these definitions to V_1 gives non-unique sets (for example, a 
well-ordering of the V_0 reals can be extended in multiple ways to the 
V_1 reals).  Crucially, however, we can still have elementary embeddings 
of V_0 into V_1, and so on.

Sincerely,
Dmytro Taranovsky
http://web.mit.edu/dmytro/www/main.htm

