[FOM] formal proofs
Dominic Mulligan
dominic.p.mulligan at googlemail.com
Fri Oct 24 04:50:31 EDT 2014
Hmm. Paraphrased, isn't "Roberts' Gauntlet" asking a group of people with
no interest or particular expertise in the subject of homotopy theory
(Mizar users, who as Roberts points out, in 40 years of continuous
development of the tool have not once aimed to formalise homotopy theory
--- giving as good an indication as any to where their interests and
knowledge lie) to compete in a timed race against experts in the field in
formalising a major result? This is being interpreted as a serious
challenge and something we can draw conclusions from ... why exactly?
On 23 October 2014 19:31, David Posner <dposner at sbcglobal.net> wrote:
> Thank you for your response, David. Needless to say most of your message
> is quite beyond me. I have been able to make reasonably good sense about
> what used to be called "Categorical Logic" from some of the papers in (the
> now ancient) Handbook of Math Logic. Michael Fourman's paper on the "Logic
> of Topoi" was particularly helpful. The value of the papers for me of
> course is that they were written for Logicians who are not experts in CT.
> It would help immensely in fostering communication if someone in the HoTT
> world were to update these papers to reflect the current state of
> categorical foundations.
>
>
> On Wednesday, October 22, 2014 4:41 PM, David Roberts <
> david.roberts at adelaide.edu.au> wrote:
>
>
> There are *models* of HoTT that could be construed as using ZFC as
> foundations: the simplicial sets model, for one. People are working on
> a cubical sets version too, in a constructive framework. But these are
> merely models of HoTT.
>
> The objects of HoTT are *not* sets, they are homotopy types. Homotopy
> types are objects of a large category that is a localisation of the
> (large) category of simplicial sets. Simplicial sets are certain
> functors Delta^op ---> Set, or equivalently sequences of sets
> X_0, X_1, X_2, ... and a whole collection of maps d_j : X_i --> X_{i-1}
> and s_j : X_i --> X_{i+1} satisfying a collection of equations. A
> localisation of a category is another category in which certain
> morphisms of the first category have been formally inverted to become
> isomorphisms, and is the smallest category with this property (all
> these are large categories, so by 'smallest' I mean there is a
> universal property). It is a theorem that the category of homotopy
> types cannot be represented by some category of sets with extra
> structure and morphisms respecting that structure. All of these steps
> are highly nontrivial mathematics, though HoTT bakes in all this
> structure from its axioms.
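As a hedged aside (an editorial gloss, not part of the original message): the "collection of equations" mentioned above are the standard simplicial identities, which in the usual indexing conventions read:

```latex
% Simplicial identities relating face maps d_i and degeneracy maps s_i
\begin{aligned}
d_i \, d_j &= d_{j-1} \, d_i && (i < j) \\
s_i \, s_j &= s_{j+1} \, s_i && (i \le j) \\
d_i \, s_j &= s_{j-1} \, d_i && (i < j) \\
d_j \, s_j &= \mathrm{id} = d_{j+1} \, s_j \\
d_i \, s_j &= s_j \, d_{i-1} && (i > j + 1)
\end{aligned}
```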
>
> Notice that ordinary sets are examples of degenerate homotopy types,
> and isomorphisms in the category of homotopy types between sets so
> described are exactly isomorphisms of sets in the usual sense. So the
> category of sets embeds in the category of homotopy types.
>
> The above description is a 'lie to children'. It relates as much to
> HoTT as calling ZFC a theory of things with things in them.
>
> Best wishes, and good luck,
>
> David
>
>
>
>
>
>
> On 23 October 2014 04:06, David Posner <dposner at sbcglobal.net> wrote:
> > Professor Schreiber
> > Surely, if HoTT mathematics cannot be formalized in set theory then it
> > must be possible to give at least a high level explanation of why that
> > doesn't require diving into a morass of abstractions. Here's a possible
> > starting point. My hopelessly naive reasoning on why I believe
> > mathematics can be formalized in set theory goes like this. Standard
> > mathematical theories are ultimately expressed (or at least expressible)
> > in a standard logic of one kind or another. The semantics of standard
> > logics are defined in set theoretic terms a la Tarski (domains and
> > relations and what not). So any standard mathematical theorem can be
> > viewed as a statement about sets and formalized in set theory. (That may
> > not be the simplest way to understand the theorem or formalize the
> > proof, but in theory it is possible.)
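The Tarskian picture sketched here (a formula is true or false relative to a set-theoretic structure of domains and relations) can be illustrated with a minimal evaluator for first-order formulas over a finite domain. This is a hypothetical, illustrative sketch by the editor, not any system discussed in the thread; the tuple encoding of formulas is invented for the example.

```python
def evaluate(formula, domain, relations, assignment):
    """Tarskian satisfaction: evaluate a first-order formula (encoded
    as a nested tuple) in a structure (domain, relations) under a
    variable assignment."""
    op = formula[0]
    if op == "rel":                     # ("rel", name, v1, v2, ...)
        _, name, *vars_ = formula
        return tuple(assignment[v] for v in vars_) in relations[name]
    if op == "not":                     # ("not", subformula)
        return not evaluate(formula[1], domain, relations, assignment)
    if op == "and":                     # ("and", f1, f2)
        return (evaluate(formula[1], domain, relations, assignment)
                and evaluate(formula[2], domain, relations, assignment))
    if op == "forall":                  # ("forall", var, subformula)
        _, var, body = formula
        return all(evaluate(body, domain, relations, {**assignment, var: d})
                   for d in domain)
    raise ValueError("unknown connective: %r" % (op,))

# Example: in the structure ({0, 1, 2}, <), the sentence
# "forall x. not (x < x)" (irreflexivity of <) holds.
domain = {0, 1, 2}
less = {(a, b) for a in domain for b in domain if a < b}
sentence = ("forall", "x", ("not", ("rel", "<", "x", "x")))
print(evaluate(sentence, domain, {"<": less}, {}))  # True
```

The point of the toy is exactly the one made above: the *meaning* of the formula is cashed out entirely in terms of sets (the domain, the relation as a set of tuples), so truth in the structure is itself a set-theoretic statement.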
> >
> > So the statement that there are mathematical theorems for which set
> > theoretic formalizations cannot or even might not exist would imply that
> > these theorems and theories (in particular parts of HoTT) are expressed
> > in a non-standard logic or employ a non-standard semantics or both. Is
> > that in fact the case? If not, where has my naive reasoning gone wrong?
> > If so, then can you say something meaningful about where the logic or
> > the semantics deviates from the usual?
> > On Tuesday, October 21, 2014 3:46 PM, "croux at andrew.cmu.edu"
> > <croux at andrew.cmu.edu> wrote:
> >
> >
> >> Thanks for this informative reply. It looks like, however impressive
> >> what we have is, there is still a lot of room for something much better
> >> in the way of absolute rigor.
> >>
> >> I haven't really worked on this, but have the ambition to do so in the
> >> following way. Make a high-level design of a "dream system" from
> >> scratch, first paying no attention to the present existing
> >> developments. This will surely lead to a number of interesting and
> >> perhaps clean theoretical questions and theorems. Then present it to
> >> experts in the existing practical systems, who will be able to readily
> >> compare it to reality and criticize.
> >>
> >
> > I would be wary of this kind of approach, as it is littered with the
> > corpses of many ill-conceived proof checkers.
> > One big issue is that what seems conceptually simple and what is
> > simple to implement as a proof checker are somewhat different things.
> >
> > For instance, what foundational system would dream of having a complex
> > module system? It seems redundant and wasteful. However, any hope of
> > having a practical system needs to address this question.
> >
> >
> >> We can repeat the process using T1 and L1 again. But it would seem that
> >> for this purpose, we may be able to lower the level of T1 and of L1,
> >> which may be a good idea.
> >>
> >> Then we continue this process for many steps. Now are we gaining
> >> anything by repetition here? We would if we actually get shorter and
> >> shorter proofs in weaker and weaker L's and weaker and weaker T's. Of
> >> course, at some point, we cannot continue to be reducing. Is there a
> >> natural limit to such a procedure?
> >
> > A natural limit is PRA, but you can probably get away with less
> > (typically admitting only polynomial computable functions). Repetition
> > is not really required: the main potential for errors is by far in the
> > implementation details, and this is not really a foundational issue, but
> > more of a problem in computer science and software engineering. However,
> > computer scientists have been verifying programs for decades!
> >
> > There have actually been a number of contributions in the area of
> > verified proof-checkers:
> >
> > http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.4582
> > http://www.cl.cam.ac.uk/~jrh13/papers/holhol.html
> > http://www.sciencedirect.com/science/article/pii/S157106611200028X
> > http://www.cs.utexas.edu/users/jared/milawa/Web/
> > http://dl.acm.org/citation.cfm?id=2103723&dl=ACM&coll=DL&CFID=587690589&CFTOKEN=49655975
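The systems linked above differ widely, but most of them reduce trust to a small kernel in the LCF tradition: theorems can only be produced by a handful of inference rules, so verifying the checker means verifying that small kernel. A toy editorial sketch of the idea in Python (not the design of any of the linked systems; the equational rules here are invented for illustration):

```python
# Toy LCF-style kernel: a Theorem can only be constructed through the
# kernel's inference rules, so the trusted computing base is exactly
# this small block of code.

_KERNEL = object()  # private capability held only by kernel rules

class Theorem:
    """Represents |- formula; only kernel rules may construct one."""
    def __init__(self, formula, _token=None):
        if _token is not _KERNEL:
            raise TypeError("theorems can only be produced by kernel rules")
        self.formula = formula

def refl(term):
    """Axiom schema: |- term = term."""
    return Theorem(("eq", term, term), _token=_KERNEL)

def sym(thm):
    """Rule: from |- a = b infer |- b = a."""
    op, a, b = thm.formula
    if op != "eq":
        raise TypeError("sym expects an equation")
    return Theorem(("eq", b, a), _token=_KERNEL)

def trans(thm1, thm2):
    """Rule: from |- a = b and |- b = c infer |- a = c."""
    _, a, b1 = thm1.formula
    _, b2, c = thm2.formula
    if b1 != b2:
        raise TypeError("middle terms do not match")
    return Theorem(("eq", a, c), _token=_KERNEL)

# Derived reasoning must go through the rules above; forging a Theorem
# directly raises an error.
t = sym(refl("x"))
print(t.formula)  # ('eq', 'x', 'x')
```

Real kernels (HOL Light's is a few hundred lines of OCaml) enforce this with the type system rather than a runtime token, but the architectural point is the same: errors anywhere outside the kernel cannot produce a bogus theorem.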
> >
> >
> >> But we need to start somewhere to even define the problem. A purist
> >> starting point would be primitive ZFC, where we have a claim that a
> >> given file constitutes an absolutely correct proof of A in ZFC. Of
> >> course, since we are working with primitive ZFC (no sugar), no human is
> >> going to even look at this monstrosity. There is the problem of even
> >> getting a real life math statement into primitive notation absolutely
> >> correctly. But I would like to separate off that question, and take
> >> this now for granted.
> >>
> >
> > This is the real challenge. The mental abstractions humans use when
> > stating and proving theorems are extremely hard to capture, and I think
> > this represents the longstanding open problem of computer-assisted
> > verification.
> >
> > More realistically, I think we should investigate manners of capturing
> > "field-specific" knowledge, notations and intuition in a way that can be
> > mechanized. It's hard for me to think of examples, but "working in the
> > group of endomorphisms" in group theory comes to mind, as well as
> > "passing to the dual" in linear algebra or simply just "swapping
> > integrals" in calculus (or swapping sums, or swapping differentials and
> > integrals, etc.).
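As an editorial gloss: "swapping integrals" is licensed by Fubini's theorem, and its side conditions are exactly the kind of field-specific obligation a mechanized version would have to discharge. Schematically, for f integrable on A x B:

```latex
\int_A \left( \int_B f(x, y) \, dy \right) dx
  \;=\; \int_B \left( \int_A f(x, y) \, dx \right) dy
```

The human habit is to swap silently; a proof assistant must surface and prove the integrability hypothesis each time, which is where good "field-specific" automation would pay off.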
> >
> > Just as there is no "magic bullet" in software engineering, it seems
> > reasonable that there is no magical solution to the problem of
> > mechanizing mathematics, but we have already gone from "unthinkable" to
> > "possible with a massive amount of time and effort" by incremental
> > improvements.
> >
> >
> >> I find this whole approach rather slippery in terms of what the
> >> details would look like, and whether human imperfections are creeping
> >> in in subtle ways. I need to think more deeply about this.
> >
> > I'm more worried about human leaps of brilliance :)
> >
> > Best,
> >
> > Cody
> >
> >
> >
> >
> >
> > _______________________________________________
> > FOM mailing list
> > FOM at cs.nyu.edu
> > http://www.cs.nyu.edu/mailman/listinfo/fom
> >
> >
> >
>
> >
>
>
>
> --
> Dr David Roberts
> Research Associate
> School of Mathematical Sciences
> University of Adelaide
> SA 5005
> AUSTRALIA
>
>
>
>
>
>