[FOM] New Umbrella?/big picture

Mitchell Spector spector at alum.mit.edu
Fri Nov 7 02:40:29 EST 2014


Thanks for your comments, Charlie.

Perhaps I should clarify that I wasn't really intending to analyze Leibniz' approach. I was thinking 
about the equality principle (or perhaps more accurately, the definition of equality) which Friedman 
enunciated in connection with his Flat Foundations -- a principle which, as Harvey pointed out, can 
arguably be traced back to Leibniz.

My interest here is in whether we want to establish a large mathematical universe (on the principle
that every possible abstract pattern should be included) or a small mathematical universe (on the
principle that the universe should be categorical, or uniquely determined, to the extent that this is
possible). I am also interested in whether any particular foundational approach can correctly be
characterized as a realization of a large-universe ideal or of a small-universe ideal.


ZF (optionally with large cardinal axioms) was intended as a realization of the large-universe ideal,
although this has turned out to be successful only with respect to the height of the universe.  The
method of forcing shows that ZF is not in any sense a large-universe theory with respect to the width
of the universe.  (Moreover, the prevalence of L-like inner models in modern set theory shows the
practical advantages that can be obtained from small universes, even if only as constructs within a
large universe.)
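
The standard illustration of the width phenomenon is the continuum hypothesis: Goedel's constructible
universe and Cohen's forcing give, respectively,

   Con(ZF) implies Con(ZFC + CH)     and     Con(ZF) implies Con(ZFC + not-CH),

so ZF leaves undetermined even the size of the power set of omega, the very first step in the width of
the universe.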


Mitchell



Charlie wrote:
>    See interspersed comments below:
>
> On Nov 5, 2014, at 4:07 PM, Mitchell Spector <spector at alum.mit.edu> wrote:
>
>> Harvey Friedman wrote:
>>> ...
>>> There is one quite friendly modification of the usual set theoretic
>>> orthodoxy of ZFC and its principal fragments, and that is Flat
>>> Foundations:
>>>
>>> http://www.cs.nyu.edu/pipermail/fom/2014-October/018337.html
>>> http://www.cs.nyu.edu/pipermail/fom/2014-October/018340.html
>>> http://www.cs.nyu.edu/pipermail/fom/2014-October/018344.html
>>>
>>> With regard to the last of these links, I think Leibniz is credited
>>> for the general principle
>>>
>>> x = y if and only if for all unary predicates P, P(x) iff P(y)
>
>     Leibniz is truly complex and writes his so-called Identity of Indiscernibles
> principle in different places frequently using different terminology.  I am unaware,
> however, of his ever referring to “unary predicates”.  To me, the closest he comes
> to expressing this is by referring to “properties,” but how are they to be explicated?
>
> Admittedly, he does seem to think all individual concepts (i.e., monads) are both
> distinguished by a single property which also (somehow) “reflects” all properties
> of all other objects (over all time) in the same world (thereby creating an
> equivalence class of worlds in which every element belongs to a single partition).
> God seems to select out a given representative of each world, then decides which
> one reflects the “best possible world,” which is often explicated in terms of
> “plenitude”. <— Maybe this is suggestive mathematically (??)
>
>
>
>
>>>
>>> whereby giving a way of defining equality in practically any context
>>> (supporting general predication).
>>> ...
>>
>>
>>
>> Leibniz' principle, in this context, appears to be essentially a minimizing principle, making the
>> mathematical universe as small as possible.  ("The universe is so small that there can be no two
>> distinct objects that look alike.”)
>
>     This point seems possibly correct (I’m hedging because Leibniz is complicated), though Mates
> says for Leibniz "[t]o say that x is the same as y does not /mean/ that they fall under the same
> concepts, but the principle guarantees that (fortunately for us) if x is different from y, then it
> is at least in principle possible for some mind to tell them apart.”
>
>
>> This is in contrast to the maximizing principle which is a prime motivation of many large cardinal
>> axioms -- saying that the mathematical universe is so very large that many objects in it are
>> indistinguishable from one another (indistinguishable according to some specific criterion, of
>> course, depending on the large cardinal axiom).
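
To add a standard instance of the indistinguishability I had in mind here: if kappa is a Ramsey
cardinal, then every structure for a countable first-order language with universe kappa has a set of
indiscernibles of size kappa -- kappa many objects that no formula of the language can tell apart.
Similarly, the existence of 0# provides a proper class of ordinals that are indiscernible in L.
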
>>
>>
>>
>> However, I'd like to throw out a different way of viewing Leibniz's principle that just might make
>> it a maximizing principle after all, in an approach that may be more in line with Harvey's Flat
>> Foundations.
>>
>>
>>
>> The traditional view involves taking a specific collection of predicates (formulas in some
>> language) and then saying that the universe of objects is so large that there are objects that are
>> indistinguishable relative to that collection of predicates.  So we're starting with a fixed
>> collection of predicates and making sure the universe of objects is large relative to that
>> collection of predicates.
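
To make the counting explicit: a countable language has only countably many formulas in one free
variable, so an object's behavior with respect to them is a subset of a countable set, and there are
at most continuum many such subsets.  As soon as the universe has more than continuum many objects,
two of them must therefore satisfy exactly the same predicates of the fixed collection.
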
>>
>> On the other hand, what happens if one starts with the collection of objects and then looks at
>> predicates or relations as first-class mathematical objects, not as extensions of formulas in some
>> language?  One might then say that to maximize the mathematical universe, one would want the
>> collection of predicates to be so very large that, given any two distinct objects, there's a
>> predicate that distinguishes between those two objects.
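
To spell the maximizing reading out: if the type-1 predicates are closed under full comprehension
(and equality is available in the comprehension language -- an assumption, of course, if the
principle is meant to define equality), then for each object x there is a predicate P_x with
P_x(z) iff z = x, and P_x alone separates x from every other object.  On that reading the biconditional

   x = y if and only if for all predicates P, P(x) iff P(y)

puts no constraint on the objects at all; the entire content of the principle lies in how rich the
sort of predicates is taken to be.
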
>>
>>
>>
>> Let's continue in this vein, carrying the line of reasoning to its natural conclusion. If
>> understanding the type-0 objects requires type-1 relations as a separate sort, it would seem that
>> understanding the type-1 relations would require a third sort, that of type-2 relations
>> (predicates on the collection of type-1 relations, rather than predicates on objects).  Leibniz'
>> principle would then suggest that there are so many type-2 relations that, given any two distinct
>> type-1 relations, there's a type-2 relation that distinguishes between those two type-1 relations.
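
Level by level, the principle I am suggesting would read: for type-n relations R and S,

   R = S if and only if for all type-(n+1) predicates P, P(R) iff P(S),

with each new sort posited to be rich enough to separate any two distinct entities of the sort below it.
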
>>
>> If we do this, we wouldn't stop at type-2 relations, of course.  As we added in type-3 relations,
>> type-4 relations, etc., we would seem to find ourselves reconstructing at least Russell's theory
>> of types.  Eventually we'd build up to Z and then ZF, on the principle that one would want to
>> iterate the type hierarchy into the transfinite, so that we have as many type levels as possible
>> (maximizing the universe again).
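
The transfinite iteration is of course just the cumulative hierarchy:

   V_0 = emptyset,   V_(alpha+1) = powerset(V_alpha),   V_lambda = union of V_alpha for alpha < lambda,

and "as many type levels as possible" becomes the demand that the ordinals -- the height of the
universe -- run as far as possible.
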
>>
>>
>>
>> For what it's worth, the name "Flat" Foundations may no longer be appropriate, since the structure
>> is now tiered rather than flat.
>>
>> I'd be interested in hearing any thoughts on this.  It seems to develop the Flat Foundations idea
>> in a natural way -- but at the same time this extension to more than two sorts appears to obviate
>> the purpose of Flat Foundations, since we're reconstructing what has become the traditional
>> mathematical universe, tiered by rank.
>>
>>
>> Mitchell Spector
>

