[FOM] The still unbearable EFQ: Reply to Martin Davis
Tennant, Neil
tennant.9 at osu.edu
Sat Sep 5 10:18:28 EDT 2015
There is the following Methodological Maxim for relevantizing 'at the level of the turnstile':
(MM) No proof of a sequent may contain as a proper subproof any proof of a subsequent of that sequent.
Here is a very simple example. Suppose you are asked to solve the deductive problem
(A&~C)->A, A, ~C ?- A.
An intelligent proof-finder will give the solution
A:A
upon running its first line (in a Prolog program) that says, in effect, "Look to see whether the conclusion is already a premise!". A really dumb proof-finder (for a non-core system) might end up churning out the 'proof' (with parallelized ->-E)
              A  ~C
             ______    ___(1)
(A&~C)->A     A&~C      A
___________________________(1)
             A
This is not a core proof, because the proper subproof A at the top is a proof of the sequent A:A, which is a proper subsequent of the sequent (A&~C)->A, A, ~C:A that the 'proof' purports to prove.
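For those who like to see the key relation spelled out operationally, here is a minimal toy sketch in Haskell (my own illustration only; formulas are left as plain strings, and nothing here is drawn from any actual implementation of Core Logic): a sequent is a set of premises paired with a conclusion (or #), and (MM) turns on the test for being a proper subsequent.

  import qualified Data.Set as Set

  -- Toy representation: formulas as strings; a conclusion of Nothing stands for #.
  type Formula = String
  type Sequent = (Set.Set Formula, Maybe Formula)

  -- X':A' is a subsequent of X:A iff X' is a subset of X and A' is either A or #.
  isSubsequent :: Sequent -> Sequent -> Bool
  isSubsequent (x', c') (x, c) =
    x' `Set.isSubsetOf` x && (c' == c || c' == Nothing)

  -- A *proper* subsequent is a subsequent distinct from the sequent itself.
  isProperSubsequent :: Sequent -> Sequent -> Bool
  isProperSubsequent s' s = isSubsequent s' s && s' /= s

  -- The example above: A:A is a proper subsequent of (A&~C)->A, A, ~C : A,
  -- so the displayed 'proof', which contains a proof of A:A as a proper
  -- subproof, violates (MM).
  violatesMM :: Bool
  violatesMM = isProperSubsequent
                 (Set.fromList ["A"], Just "A")
                 (Set.fromList ["(A&~C)->A", "A", "~C"], Just "A")   -- True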
(MM) has no effect at all on the completeness of a core logical system. It is in fact incorporated into the formal definition of core proofs, because it expresses in the strongest possible terms the Core logician's aversion to Thinning in all its forms, on the left or on the right.
(MM) is motivated by the commonsensical insight that comes to any computational logician with sufficient experience in seeking to automate the process of proof-finding. If you are posed the deductive task of deducing A from (premises in) X, you will have succeeded as soon as you find either a proof of A from some subset X' of X, or a proof that (some subset X' of) X is inconsistent. The Core logician will not have to do anything more; he will be completely satisfied with a proof of X':A or of X':# as a solution to the deductive problem X?-A. The orthodox logician, on the other hand, for reasons best known to herself, will want to 'nail it' by giving a proof of the sequent X:A *itself*. So, like a fastidious bureaucrat, she will happily rubber-stamp that proof with a terminal step of Thinning (on both sides, if necessary), and only then pronounce herself satisfied with the result---a 'proof' of X:A.

If you think this is an unfair caricature of the orthodox logician, you might object "Oh no, she will be just as happy as the Core logician, should she succeed in proving X':A, for some proper subset X' of X; the set X was, after all, originally given as a set of *available* premises; there is no obligation to use each and every member of X in one's final solution to the deductive problem X?-A." To this, the Core logician has a very disarming reply: "Alright then; let a proof of a sequent of the form X':# also count as a solution to the original deductive problem! For there is no difference in principle between subsetting down on the left and subsetting down on the right. Both of these achievements yield a potentially stronger logical result, don't they? It's what I call epistemic gain."
(MM) makes for fantastically efficient proof-search in Core Logic, because of the following consideration concerning deductive subproblems (one which does NOT obtain for orthodox systems). Suppose you have built up a would-be proof (of, say, the deductive problem X?-A) to the point where there is an outstanding subtask of solving (say) the deductive subproblem Y?-B for an embedded subproof. Should you then find a proof P of some subsequent Y':B or Y':# of Y:B, P can be embedded at that point, and all foreshadowed rule-applications lower down (in the proof-tree generated thus far by your choices of deductive sub-problems for branching upwards) can either be applied as originally intended, *or eschewed* on the grounds that they are actually unnecessary.
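To make the search-theoretic point concrete, here is another toy sketch (again merely illustrative, using the same string-based Sequent representation as above, repeated here so the fragment stands alone; a pending rule-application is abstracted as a function on sequents): once the subproblem Y?-B is solved by a proof of any subsequent of Y:B, that proof can be plugged in, and a foreshadowed step below it is applied only if it is still needed.

  import qualified Data.Set as Set

  type Formula = String
  type Sequent = (Set.Set Formula, Maybe Formula)   -- Nothing = #

  isSubsequent :: Sequent -> Sequent -> Bool
  isSubsequent (x', c') (x, c) =
    x' `Set.isSubsetOf` x && (c' == c || c' == Nothing)

  -- A foreshadowed rule-application lower down in the search tree, abstracted
  -- as the sequent-level effect it would have on whatever is plugged in above it.
  type PlannedStep = Sequent -> Sequent

  -- Plug in the proof found for the subproblem (identified here only by the
  -- sequent it proves): if it already proves a subsequent of the overall goal,
  -- the planned step is eschewed as unnecessary; otherwise it is applied as
  -- originally intended.
  finishOrEschew :: Sequent -> PlannedStep -> Sequent -> Sequent
  finishOrEschew goal plannedStep found
    | found `isSubsequent` goal = found              -- epistemic gain: stop here
    | otherwise                 = plannedStep found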
Now, Martin Davis wrote:
"Suppose one knows that A and ~B are true and wishes to prove C. A reductio ad absurdam proof might proceed by proving B from A and ~C. The principle involved can be rendered:
if A,~C |- B, then A,~B |- C
Then the special case B=A gives
A,~A |- C
This shows that to eliminate EFQ one must interdict a simple special case of an important general logical principle."
Let us look at this more closely, but in the light of (MM). Martin's special case involves uniformly substituting A for B; so B is atomic. He envisages having some proof
A, ~C
:
B
and then, presumably, embedding it into a longer proof as follows:
       ___(1)
     A, ~C
        :
~B      B
__________
     #
   _____(1)
     C
whose terminal step is an application of Classical Reductio ad Absurdum. Next, Martin suggests, consider substituting A for B. The result would be
       ___(1)
     A, ~C
        :
~A      A
__________
     #
   _____(1)
     C
At this point the (Classical) Core logician would object "But hang on: that subproof indicated by the descending dots is 'proving' A from A,~C; but it contains the (degenerate) proof A of the sequent A:A, which is a proper subsequent of A,~C:A. So, sorry Martin, this substitution instance will not count as a (classical) core proof. For it violates (MM)."
In response, Martin might say "Alright, let's take the logically stronger proof A of A:A in place of that weaker subproof of mine 'of' A,~C:A."
The result of Martin's cooperative concession will be
~A      A
__________
     #
   _____(1)
     C
The Classical Core Logician now responds "Whoa, Martin! What's that final step?!". And Martin replies "Why, the step of Classical Reductio from the original proof on which we performed the substitution." To which the Classical Core Logician responds "Sorry, Martin, now that the subproof of # from ~A,A no longer involves the assumption ~C, you are not allowed to apply Classical Reductio to get C as a conclusion. For the rule of Classical Reductio requires that the assumption ~C should actually have been used. It's called the 'No vacuous discharge' requirement, and it is absolutely essential for the project of avoiding irrelevancies in proofs."
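For concreteness, here is a toy rendering of the 'No vacuous discharge' check just invoked (illustrative only; formulas are again plain strings, with negation marked by a leading '~'): Classical Reductio may discharge ~C and conclude C only if ~C actually occurs among the open assumptions of the refutation of #.

  import qualified Data.Set as Set

  type Formula = String

  -- Classical Reductio: from a refutation of # with open assumptions 'open',
  -- conclude c while discharging ~c, provided that ~c was actually used,
  -- i.e. occurs among the open assumptions.
  classicalReductioAllowed :: Set.Set Formula -> Formula -> Bool
  classicalReductioAllowed open c = ("~" ++ c) `Set.member` open

  -- Martin's original proof refutes # from {~B, A, ~C}, so discharging ~C to
  -- conclude C is in order.  After the substitution and the strengthening of
  -- the subproof, # is refuted from {~A, A} alone, and the step to C is blocked:
  --   classicalReductioAllowed (Set.fromList ["~A", "A"]) "C"  ==  False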
So, what has emerged? The lesson that we cannot rely on cherished (but inexplicably over-valued) principles like the following:
(PPUS) [Preservation of Proofhood Under Substitution]
The result of uniformly substituting any formula for an atomic formula within a proof is a proof.
The fact that PPUS fails for the Core systems is not at all disquieting. Just as the Core Logician turns the failure of *Absolutely Unrestricted* Transitivity into a more-than-compensating reassurance of epistemic gain, so too the Core Logician can achieve epistemic gain from the failure of *Absolutely Unrestricted* Substitution. Here is what the Core Logician tells the would-be substituter:
"Take your original proof P involving the atom B, of the sequent S. Uniformly substitute your chosen formula A for B, to get the tree-array P^B_A that might or might not be a proof of S^B_A. Don't gaily assume that this is indeed a legal core proof. Carefully check whether P^B_A violates (MM). Wherever it does, replace the (proper) subproof of the sequent in question by the logically stronger proof of a proper subsequent thereof. And guess what?---all the steps below such replacement either go through, or can be discarded, and the result will be a proof of *some subsequent of* S^B_A."
All these instructions are effective, and can be carried out in linear time. Once again, Core Logic reveals itself to be an amazing source of potential epistemic gain. It has been a woefully neglected source of deductive power. We need to develop an industry of Logical Fracking to exploit it.
Neil Tennant