[FOM] Fwd: A request--Reply to Mario Carneiro

Tennant, Neil tennant.9 at osu.edu
Sat Sep 5 11:14:59 EDT 2015


Mario Carneiro nails it when he writes

"Given a derivation of # under the assumption A, we are allowed to conclude A => B, that is, if A |- # then |- (A => B). Thus "if (not P and P) then Q" is derivable, in the form "|- (P /\ ~P) => Q". What is *not* derivable is a derivation of Q from the assumption P /\ ~P, that is "(P /\ ~P) |- Q" or "P, ~P |- Q". This is what Tennant is calling EFQ, while others have been looking at the former as a closed-form EFQ."

So I thank him for an unusually intelligent, reflective comment on what has transpired thus far about the unbearable ghastliness of EFQ.  :-)

This reply is intended to answer his follow-up question why

"... one cannot apply modus ponens to the implication to get true EFQ. For example:

          ___      ___(1)
           ~A       A
           __________
                #
     ___     ______(1)
      A        A->B
     ______________
            B
Here we are taking A and ~A as assumptions, and deriving B, i.e. "A, ~A |- B" which is EFQ if anything is. How does modus ponens / implies-elimination interact with the lack of cut?"

Remember that all core proofs are in normal form. Mario's proof just displayed above is not in normal form. For the sentence A->B stands both as the conclusion of ->I and as the major premise of ->E (the final step of modus ponens, which is the serial form of the elimination rule for ->).
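
The reduction step in play here is, roughly, the ->-reduction for the case in which the subordinate derivation for ->I concludes # (dots stand for arbitrary subderivations):

            __(1)
             A
             :
             #
          ______(1)      :
           A->B          A
          ________________
                B

reduces to

             :
             A
             :
             #

That is, the derivation of the minor premise A is grafted onto the assumption-occurrences of A that were discharged at (1). Since the subordinate derivation concludes #, not B, the conclusion B simply drops away, and the reduct concludes #.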

In order to find the 'real' core proof resulting from this would-be accumulation of

          ___      ___(1)
           ~A       A
           __________
                #
             ______(1)
              A->B

'on top of'

                A         A->B
               _______________
                      B

one would have to normalize Mario's proof. The result is---wait for it!---

              ~A       A
              __________   ,
                   #

which is a core proof of a proper *subsequent* of the would-be result ~A,A:B of the attempted accumulation: the premises are the same, but the conclusion B has given way to #.

Core Logic does not allow *Unrestricted Accumulation of Proofs*---the natural-deduction analog of not allowing *Unrestricted Cut* in sequent calculus. Whenever (in natural deduction) a would-be accumulation-of-proofs is proffered as a proof, the Core Logician says "Wait! Normalize the would-be result first, in order to determine what it is that you have really 'proved'. You may well find yourself making important epistemic gains as a result."
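
The general guarantee behind this advice can be put schematically. Where P is a core proof of X:A and P' is a core proof of A,Y:B, the reduct [P,P'] is effectively computable from P and P', and is a core proof of some subsequent of the sequent X,Y:B that unrestricted Cut would deliver:

      [P,P'] proves either  X':#  or  X':B,  for some subset X' of X,Y.

(More on this below.)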

Mario's proof is a case in point.  It is very like the situation with Lewis's notorious 'proof' of ~A,A:B that results from accumulating the little proof (call it P)

      A
   _____
    AvB

with (whatever proof one has of) Disjunctive Syllogism:

   AvB   ~A
   _______
        B

The Core logician can prove Disjunctive Syllogism, by the following proof (call it P'):

                     ___(1)
              ~A      A
              __________       ___(1)
    AvB            #            B
    ______________________________(1)
                 B
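
(Note the form of vE at work here: in Core Logic a case proof for vE is allowed to conclude #, and when one case does so, the conclusion of the whole vE step is supplied by the other case, here B.)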

But if one were to try to accumulate the proof P consisting of the single step of vI 'on top of' the major premise AvB for vE within proof P', one would thereby produce an *abnormal* proof, which would not qualify as a core proof. But the Core logician can cater to the underlying motivation thus to accumulate, by calculating the reduct [P,P'] of the two proofs P and P'. The result is the proof---wait for it!---

              ~A       A
              __________
                   #

Epistemic gain again. By thoughtfully reducing, rather than recklessly accumulating, you discover exactly what the core proofs you already have in hand (P and P') really amount to.
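
For the record, the abnormal accumulation just described would look like this, with AvB standing both as the conclusion of vI and as the major premise of vE:

                     ___(1)
     A        ~A      A
   ______     __________       ___(1)
    AvB            #            B
    ______________________________(1)
                 B

The v-reduction that normalizes this away works, roughly, as follows: the disjunct actually introduced by the step of vI (here, the left disjunct A) selects the corresponding case proof within P'; the proof of that disjunct (here, just the assumption A itself) is grafted onto the case assumption discharged at (1); and since the selected case proof concludes #, so does the reduct. The conclusion B drops out altogether, leaving the stronger result that the combined premises ~A, A are inconsistent.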

These degenerate examples are so degenerate that they might not properly underscore the general point, which I shall therefore stress separately. In general, you might have a core proof P of X:A, and another core proof P' of A,Y:B. In order to discover what this might amount to, do not rush to cut, in the belief that you now have a 'proof' of X,Y:B. Rather, calculate the reduct [P,P']. For this latter will tell you what P and P' together really amount to. You might even discover that the union of consistent X with consistent Y turns out to be inconsistent. But even if that is not the case, [P,P'] will be a proof of B from *some subset of* X,Y.

If you don't want to waste the possibly hyperexponential time involved in finding out exactly which subset of X,Y this might be, well, OK. We all have competing demands on our time. But maybe some bright grad student will go on to 'improve' your result (by Cut) of B 'from' X,Y, by deducing B from some *proper* subset of X,Y. And that grad student will subsequently get much more credit, for having produced a logically stronger result. Then you, suitably chagrined, go back to your computer and compute [P,P']---your 'very own' [P,P'], mind you, that you were too lazy, or otherwise engaged, to find for yourself---and guess what?---the reduct turns out to use only that wretched proper subset of axioms that your grad student is now getting so much credit for having shown to suffice for B.

Take it or leave it; the completion of any search for potential epistemic gain is up to you. I am offering only a provably effective tool, not telling you always to use it!

Neil Tennant