[FOM] New Umbrella?/big picture
spector at alum.mit.edu
Wed Nov 5 19:07:32 EST 2014
Harvey Friedman wrote:
> There is one quite friendly modification of the usual set theoretic
> orthodoxy of ZFC and its principal fragments, and that is Flat
> Foundations.
> With regard to the last of these links, I think Leibniz is credited
> for the general principle
> x = y if and only if for all unary predicates P, P(x) iff P(y)
> whereby giving a way of defining equality in practically any context
> (supporting general predication).
Leibniz's principle, in this context, appears to be essentially a minimizing principle, making the
mathematical universe as small as possible. ("The universe is so small that no two
distinct objects can look alike.")
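For concreteness, the quoted principle can be sketched in Lean (my own illustrative formalization, not anything from Harvey's post; note that the one-directional form suffices, since the converse follows by symmetry):

```lean
-- Leibniz equality: x equals y when every predicate holding of x
-- also holds of y.
def LeibnizEq {α : Type} (x y : α) : Prop :=
  ∀ P : α → Prop, P x → P y

-- Leibniz equality implies the built-in equality: instantiate P
-- with the predicate "is equal to x".
theorem leibniz_implies_eq {α : Type} {x y : α}
    (h : LeibnizEq x y) : x = y :=
  h (fun z => x = z) rfl
```

This is the sense in which the principle gives "a way of defining equality in practically any context": equality need not be primitive once full predication is available.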
This is in contrast to the maximizing principle that is a prime motivation of many large cardinal
axioms -- the assertion that the mathematical universe is so very large that many objects in it are
indistinguishable from one another (indistinguishable according to some specific criterion, of
course, depending on the particular large cardinal axiom).
However, I'd like to throw out a different way of viewing Leibniz's principle that just might make
it a maximizing principle after all, in an approach that may be more in line with Harvey's Flat
Foundations.
The traditional view involves taking a specific collection of predicates (formulas in some language)
and then saying that the universe of objects is so large that there are objects that are
indistinguishable relative to that collection of predicates. So we're starting with a fixed
collection of predicates and making sure the universe of objects is large relative to it.
On the other hand, what happens if one starts with the collection of objects and then looks at
predicates or relations as first-class mathematical objects, not as extensions of formulas in some
language? One might then say that to maximize the mathematical universe, one would want the
collection of predicates to be so very large that, given any two distinct objects, there's a
predicate that distinguishes between those two objects.
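Put this way, the requirement is just that the type-1 sort separates points of the type-0 sort. If the type-1 sort is taken to contain all predicates on the objects, this separation property holds automatically, as a small Lean sketch (again my own illustration, not part of the original discussion) shows:

```lean
-- If the type-1 sort contains *all* predicates on the objects,
-- any two distinct objects are separated by some predicate:
-- take P to be "is equal to x".
theorem exists_separating_pred {α : Type} {x y : α}
    (h : x ≠ y) : ∃ P : α → Prop, P x ∧ ¬ P y :=
  ⟨fun z => x = z, rfl, h⟩
```

The maximizing content of the principle, on this reading, is that the collection of first-class predicates should be rich enough to make such a separating witness available for every pair of distinct objects.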
Let's continue in this vein, carrying the line of reasoning to its natural conclusion. If
understanding the type-0 objects requires type-1 relations as a separate sort, it would seem that
understanding the type-1 relations would require a third sort, that of type-2 relations (predicates
on the collection of type-1 relations, rather than predicates on objects). Leibniz's principle would
then suggest that there are so many type-2 relations that, given any two distinct type-1 relations,
there's a type-2 relation that distinguishes between those two type-1 relations.
If we do this, we wouldn't stop at type-2 relations, of course. As we add type-3 relations,
type-4 relations, and so on, we would seem to find ourselves reconstructing at least Russell's theory of
types. Eventually we'd build up to Z and then ZF, on the principle that one would want to iterate
the type hierarchy into the transfinite, so that we have as many type levels as possible (maximizing
the universe again).
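The transfinite iteration alluded to here is essentially the cumulative hierarchy: writing \mathcal{P} for the power-set (full predicate) operation on each level, a standard presentation of the levels is

```latex
V_0 = \varnothing, \qquad
V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\beta < \lambda} V_\beta \quad \text{(for limit } \lambda\text{)}.
```

Roughly speaking, the type-n relations live at the finite levels; Z already proves the existence of levels up to (but not including) V_{\omega+\omega}, while ZF's Replacement axiom is what lets the iteration run through all the ordinals.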
For what it's worth, the name "Flat" Foundations may no longer be appropriate, since the structure
is now tiered rather than flat.
I'd be interested in hearing any thoughts on this. It seems to develop the Flat Foundations idea in
a natural way -- but at the same time this extension to more than two sorts appears to obviate the
purpose of Flat Foundations, since we're reconstructing what has become the traditional mathematical
universe, tiered by rank.