[FOM] New Umbrella?/big picture

Charlie silver_1 at mindspring.com
Fri Nov 7 10:51:24 EST 2014


	You’re quite welcome.  I understand your interest; Leibniz happens to be one of my hobby-horses. For a long time I thought Leibniz’s complex cosmology was ultimately mathematical.  I’d thought at one time (and sometimes still think) that L’s “compossible worlds” were generated by ultrafilters. And on and on. But if you, Harvey, and others can import L’s principles into Foundations, so much the better.

	Re large vs. small universes vis-à-vis Leibniz, I think Harvey may be on to something in taking properties to be one-place predicates for Leibniz.  One way to look at the situation is that every property of every individual may be embodied in a single one-place predicate.  Not only would that predicate identify the individual uniquely, but it would also (somehow) reflect all properties of every other individual in that world (set?).

	To create a “small” mathematical universe, we might consider L’s “worlds”: though each contains an infinite number of individuals, each individual “reflects everything about that world,” and thus any one individual serves as a representative of its partition, i.e., its world, among all the other partitions (worlds) into which the totality of possible individuals is divided.  Leibniz said there are infinitely many possible worlds, though I don’t know “how infinite”.

	But you can see I’ve imposed notions of my own amid Leibniz’s, especially in making up the idea that each world is determined by a non-principal ultrafilter, which allows for infinitely many individuals but also shows them to be inextricably interrelated.
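
	To make the ultrafilter talk a bit more concrete (this is just the standard definition, nothing found in Leibniz himself): a non-principal ultrafilter U on an infinite set I of individuals is a collection of subsets of I such that

	   (1) I belongs to U and the empty set does not;
	   (2) if A belongs to U and A is a subset of B, then B belongs to U;
	   (3) if A and B belong to U, then so does their intersection;
	   (4) for every subset A of I, either A or its complement I \ A belongs to U;
	   (5) no finite subset of I belongs to U.

	Condition (5) forces I to be infinite, and condition (4) gives the “each part reflects the whole” flavor: every subset of individuals is decided one way or the other by U, so the U-large sets deliver a definite verdict on every (extensional) property. One could likewise read compossibility as an equivalence relation on possible individuals, with the worlds as its equivalence classes, so that choosing any single representative fixes the whole cell.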

	Sorry for the infusion of unrelated philosophy, but maybe Leibniz’s notions can be of some small benefit in your arena.  (Maybe you can even sneak in infinitesimals?)

	Even with the benefit of such Leibnizian speculation, I cannot see how to distinguish a high universe from a thick one.


	What you write below is way over my head technically, but the notions are highly provocative.  Thanks for your very clear explanation.

Charlie Silver


On Nov 6, 2014, at 11:40 PM, Mitchell Spector <spector at alum.mit.edu> wrote:

> Thanks for your comments, Charlie.
> 
> Perhaps I should clarify that I wasn't really intending to analyze Leibniz' approach. I was thinking about the equality principle (or perhaps more accurately, the definition of equality) which Friedman enunciated in connection with his Flat Foundations -- a principle which, as Harvey pointed out, can arguably be traced back to Leibniz.
> 
> My interest here is in whether we want to establish a large mathematical universe (on the principle that every possible abstract pattern should be included) or a small mathematical universe (on the principle that the universe should be categorical or uniquely determined, to the extent that this is possible), and also in whether any particular foundational approach can correctly be characterized as a realization of a large-universe ideal or a small-universe ideal.
> 
> 
> ZF (optionally with large cardinal axioms) was intended to be a large-universe model, although this has turned out to be successful only with respect to the height of the universe.  The method of forcing shows that ZF is not in any sense a large-universe model with respect to the width of the universe.  (Moreover, the prevalence of L-like class models in modern set theory shows the practical advantages that can be obtained with small universes, even if just as constructs within a large universe.)
> 
> 
> Mitchell
> 
> 
> 
> Charlie wrote:
>>   See interspersed comments below:
>> 
>> On Nov 5, 2014, at 4:07 PM, Mitchell Spector <spector at alum.mit.edu> wrote:
>> 
>>> Harvey Friedman wrote:
>>>> ...
>>>> There is one quite friendly modification of the usual set theoretic
>>>> orthodoxy of ZFC and its principal fragments, and that is Flat
>>>> Foundations:
>>>> 
>>>> http://www.cs.nyu.edu/pipermail/fom/2014-October/018337.html
>>>> http://www.cs.nyu.edu/pipermail/fom/2014-October/018340.html
>>>> http://www.cs.nyu.edu/pipermail/fom/2014-October/018344.html
>>>> 
>>>> With regard to the last of these links, I think Leibniz is credited
>>>> for the general principle
>>>> 
>>>> x = y if and only if for all unary predicates P, P(x) iff P(y)
>> 
>>    Leibniz is truly complex and writes his so-called Identity of Indiscernibles
>> principle in different places frequently using different terminology.  I am unaware,
>> however, of his ever referring to “unary predicates”.  To me, the closest he comes
>> to expressing this is by referring to “properties,” but how are they to be explicated?
>> 
>> Admittedly, he does seem to think all individual concepts (i.e., monads) are each
>> distinguished by a single property which also (somehow) “reflects” all properties
>> of all other objects (over all time) in the same world (thereby creating an
>> equivalence class of worlds in which every element belongs to a single partition).
>> God seems to select a representative of each world, then decides which
>> one reflects the “best possible world,” which is often explicated in terms of
>> “plenitude”. <-- Maybe this is suggestive mathematically (??)
>> 
>> 
>> 
>> 
>>>> 
>>>> whereby giving a way of defining equality in practically any context
>>>> (supporting general predication).
>>>> ...
>>> 
>>> 
>>> 
>>> Leibniz' principle, in this context, appears to be essentially a minimizing principle, making the
>>> mathematical universe as small as possible.  ("The universe is so small that there can be no two
>>> distinct objects that look alike.”)
>> 
>>    This point seems possibly correct (I’m hedging because Leibniz is complicated), though Mates
>> says for Leibniz "[t]o say that x is the same as y does not /mean/ that they fall under the same
>> concepts, but the principle guarantees that (fortunately for us) if x is different from y, then it
>> is at least in principle possible for some mind to tell them apart.”
>> 
>> 
>>> This is in contrast to the maximizing principle which is a prime motivation of many large cardinal
>>> axioms -- saying that the mathematical universe is so very large that many objects in it are
>>> indistinguishable from one another (indistinguishable according to some specific criterion, of
>>> course, depending on the large cardinal axiom).
>>> 
>>> 
>>> 
>>> However, I'd like to throw out a different way of viewing Leibniz's principle that just might make
>>> it a maximizing principle after all, in an approach that may be more in line with Harvey's Flat
>>> Foundations.
>>> 
>>> 
>>> 
>>> The traditional view involves taking a specific collection of predicates (formulas in some
>>> language) and then saying that the universe of objects is so large that there are objects that are
>>> indistinguishable relative to that collection of predicates.  So we're starting with a fixed
>>> collection of predicates and making sure the universe of objects is large relative to that
>>> collection of predicates.
>>> 
>>> On the other hand, what happens if one starts with the collection of objects and then looks at
>>> predicates or relations as first-class mathematical objects, not as extensions of formulas in some
>>> language?  One might then say that to maximize the mathematical universe, one would want the
>>> collection of predicates to be so very large that, given any two distinct objects, there's a
>>> predicate that distinguishes between those two objects.
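>>> 
>>> (Roughly, in symbols -- just a sketch, with P ranging over the predicates as
>>> genuine objects of the second sort rather than over formulas in a language:
>>> 
>>>    minimizing reading:  for all x, y, if P(x) iff P(y) for every P, then x = y
>>>                         (the objects are few relative to a fixed stock of predicates);
>>> 
>>>    maximizing reading:  for all distinct x, y, there is some P with P(x) and not P(y)
>>>                         (the stock of predicates is rich enough to separate any two objects).
>>> 
>>> The two are classically equivalent as statements; the difference is in which
>>> sort is held fixed and which is required to be large enough to meet the condition.)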
>>> 
>>> 
>>> 
>>> Let's continue in this vein, carrying the line of reasoning to its natural conclusion. If
>>> understanding the type-0 objects requires type-1 relations as a separate sort, it would seem that
>>> understanding the type-1 relations would require a third sort, that of type-2 relations
>>> (predicates on the collection of type-1 relations, rather than predicates on objects).  Leibniz'
>>> principle would then suggest that there are so many type-2 relations that, given any two distinct
>>> type-1 relations, there's a type-2 relation that distinguishes between those two type-1 relations.
>>> 
>>> If we do this, we wouldn't stop at type-2 relations, of course.  As we added in type-3 relations,
>>> type-4 relations, etc., we would seem to find ourselves reconstructing at least Russell's theory
>>> of types.  Eventually we'd build up to Z and then ZF, on the principle that one would want to
>>> iterate the type hierarchy into the transfinite, so that we have as many type levels as possible
>>> (maximizing the universe again).
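>>> 
>>> (For concreteness -- a standard sketch, nothing beyond what is alluded to
>>> above -- the transfinite iteration is the familiar cumulative hierarchy, with
>>> each level playing the role of a type:
>>> 
>>>    V_0 = the empty set,
>>>    V_{alpha+1} = the power set of V_alpha,
>>>    V_lambda = the union of all V_alpha for alpha < lambda, lambda a limit ordinal,
>>> 
>>> and the rank of an object is the least alpha such that the object lies in V_{alpha+1}.)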
>>> 
>>> 
>>> 
>>> For what it's worth, the name "Flat" Foundations may no longer be appropriate, since the structure
>>> is now tiered rather than flat.
>>> 
>>> I'd be interested in hearing any thoughts on this.  It seems to develop the Flat Foundations idea
>>> in a natural way -- but at the same time this extension to more than two sorts appears to obviate
>>> the purpose of Flat Foundations, since we're reconstructing what has become the traditional
>>> mathematical universe, tiered by rank.
>>> 
>>> 
>>> Mitchell Spector
>>> _______________________________________________
>>> FOM mailing list
>>> FOM at cs.nyu.edu
>>> http://www.cs.nyu.edu/mailman/listinfo/fom
>> 
