[FOM] Use of Ex Falso Quodlibet (EFQ) (or ECQ)---Reply to Timothy Chow

Tennant, Neil tennant.9 at osu.edu
Sun Sep 6 22:25:33 EDT 2015


Timothy Chow writes:
____________

I like the idea that maybe the debate is just whether Neil Tennant is
being "revisionary" or not.

On the other hand, I am not sure that deciding whether Tennant is being
"revisionary" is any easier than explicating the precise relationship
between informal and formal proof (and you have agreed that the latter is
a difficult problem).

It's true that forcing mathematicians to review their proofs to see if
they can be formalized in a certain way would be revisionary, but only
because current practice works entirely on the informal level, and *any*
introduction of formal proof (Core Logic or otherwise) would be
revisionary.
___________

If I may make ever so bold as to speak for myself ... No, I do not regard myself as being "revisionary" *in the matter of relevantizing* the orthodox, historically received, systems of Intuitionistic Logic and Classical Logic. I might indeed be "revisionary" qua relevance logician vis a vis *other* relevance logicians, in focusing on the turnstile rather than the conditional connective, and in keeping Disjunctive Syllogism while moderating the assumption of Unrestricted Transitivity underlying both the orthodox systems and rival relevance systems. But I am not "revisionary" in maintaining that all mathematical reasoning is most faithfully regimentable in the two core systems (Core Logic for constructive mathematics, Classical Core Logic for classical mathematics).
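
To state the point about Disjunctive Syllogism and transitivity schematically (suppressing here the precise formulation of the core rules; $\vdash$ is core deducibility, $\bot$ is absurdity):

```latex
\begin{align*}
A &\vdash A \vee B            && \text{($\vee$-Introduction: core-provable)}\\
A \vee B,\ \neg A &\vdash B   && \text{(Disjunctive Syllogism: core-provable)}\\
A,\ \neg A &\vdash B          && \text{(what unrestricted Cut would deliver: \emph{not} core-provable)}\\
A,\ \neg A &\vdash \bot       && \text{(what restricted transitivity delivers instead)}
\end{align*}
```

Cutting the first two sequents together does not yield the First Lewis Paradox; it yields only the refutation in the last line, which is all that is deductively warranted.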

John Burgess drew an excellent distinction in his 2005 paper 'No Requirement of Relevance', in Stewart Shapiro, ed., The Oxford Handbook of Philosophy of Mathematics and Logic, pp. 727-750. The distinction is between `prescriptive criticism' and `descriptive criticism' of classical logic.

The *prescriptive* critic alleges (p.729) "... that classical logic correctly describes unacceptable practices in the community of orthodox mathematicians", while the descriptive critic alleges "... that classical logic endorses incorrect principles that are *not* operative in the practice of that community."

An example of a *prescriptive* critic is the intuitionist. The intuitionist maintains that, although the use of strictly classical principles (such as Classical Reductio ad Absurdum, or Double Negation Elimination) is indeed part of the practice within 'the community of orthodox mathematicians', this is nevertheless unacceptable, as one would come to realize after conducting an appropriate philosophical critique. Eschewing the offending classical principles is necessary in order for the practice to be made *respectable*.

An example of a *descriptive* critic is the relevantist. The relevantist maintains that, although classical logic enshrines inferences such as the First Lewis Paradox A, ~A : B (from A and its negation, infer any B whatsoever), such a principle, on closer inspection, is *not* 'operative in the practice of [the mathematical] community'. Eschewing the offending classical principles is necessary in order for the practice to be *respected*.
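
For comparison, the orthodox derivation of the First Lewis Paradox runs through Ex Falso Quodlibet (rendered here in standard natural-deduction style):

```latex
% Orthodox natural deduction: negation elimination, then EFQ.
\[
  \dfrac{\dfrac{A \qquad \neg A}{\bot}\,(\neg\text{-E})}{B}\,(\text{EFQ})
\]
% Core Logic retains the step to absurdity but blocks the final EFQ step to B.
```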

So I think of the Core logician as a descriptive critic, in Burgess's sense, of Intuitionistic Logic; and the Classical Core logician is likewise a descriptive critic of Classical Logic.

In addition, the Core logician is a prescriptive critic of Classical Logic.

Being criticized for being the relevant kind of critic here is critical!

How, one might ask, could any well-trained logician take it into his or her head to commend (say) Core Logic for the faithful formalization of constructive mathematical reasoning? The answer is rather disarming, and appeals to two lines of thought or discovery ...

1. After regimenting hundreds of constructive proofs, one finds that all one is using is Core Logic. One discovers that the official, orthodox, historically received system of Intuitionistic Logic, derived from Brouwer via Heyting and others, is actually too 'flabby' for the job. It *over-provides*. The irrelevancies built into its turnstile of deducibility are simply *not needed* for the regimentation of constructive mathematical reasoning. Remember we are talking here about constructivist mathematicians' reasoning; we are *respecting* it, and trying to regiment it faithfully, naturally, 'homologously', by means of appropriate formal proofs. We are NOT criticizing the constructive mathematical practice itself! We are simply begging to differ with other would-be regimenters, who think that the perfect system---the unique system that brooks no improvement in this regard---must be the historically received one, of Intuitionistic Logic.

And there is the classical analog of the foregoing:

1'. After regimenting hundreds of classical proofs, one finds that all one is using is Classical Core Logic. One discovers that the official, orthodox, historically received system of Classical Logic, derived from Frege via Hilbert and others, is actually too 'flabby' for the job. It *over-provides*. The irrelevancies built into its turnstile of deducibility are simply *not needed* for the regimentation of classical mathematical reasoning. Remember we are talking here about classical mathematicians' reasoning; we are *respecting* it, and trying to regiment it faithfully, naturally, 'homologously', by means of appropriate formal proofs. We are NOT criticizing the classical mathematical practice itself! We are simply begging to differ with other would-be regimenters, who think that the perfect system---the unique system that brooks no improvement in this regard---must be the historically received one, of Classical Logic.

Since Core Logic relevantizes Intuitionistic Logic in exactly the same way as Classical Core Logic relevantizes Classical Logic, we have reason to believe that something systematic is going on here. The genuinely relevant, topic-driven, conceptually focused reasoning of mathematicians of either camp is not properly respected by the 'over-provision' of the respective orthodox logical systems.

The second line of thought/discovery leading to the core systems is as follows.

2. After considerable experience with the automation of deduction, one realizes that proof theory's conception of proofs as inductively defined objects built up in the way prescribed by the logical system's rules of inference needs to be complemented with a rather different conception of proof (one that informs the thinking of the computational logician), according to which a proof is something to be 'built up from below' as one takes a problem sequent-to-be-proved, and breaks it up into suitable subproblems.

I was struck very early on (in the late 1980s) by the following idea: When a proof-search has reached a certain stage, with a tree of foreshadowed rule-applications leaving one with certain subproblems to be solved, wouldn't it be meet and right (and downright efficient!) to be able to avail oneself of any proof (yet to be discovered) that proves, not just the actual `target sequent' of one of those subproblems, but *any subsequent thereof*? After all, one should lose *nothing* by proving a yet-stronger logical result than such result as would suffice for one's purposes at that stage.

So, the idea continued ... why not carefully re-design the rules of inference so that they enable one *always to capitalize on such serendipity in (sub)proof-search*? Make it the case that the skeleton-tree of foreshadowed rule-applications can be left in place (once all the subproblems at the very top have been solved), with each rule-application being preserved *only if needed*. If *not* needed, then the foreshadowed rule-application can simply be omitted.

To repeat: all rules must be formalized in such a way that this Global Anti-Dilution Constraint can be implemented. In addition: this constraint, like all other constraints on proof-search, must not destroy the *completeness* of the logical system.
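
The notion of a subsequent doing duty for its target can be captured in a few lines. Here is a toy illustration (a sketch for exposition only, not an actual theorem-prover): a sequent X : A is modelled as a pair of a frozenset of premise strings and a conclusion string, with '#' marking absurdity.

```python
# Toy model of the 'subsequent' idea from proof-search: a proof of a
# subsequent of a target sequent should count as solving that target.
# A sequent X : A is a pair (frozenset of premises, conclusion string);
# the conclusion '#' stands for absurdity. Names here are illustrative.

def subsumes(found, target):
    """Does a proof of `found` serve wherever a proof of `target` would?

    Yes, if found's premises are a subset of target's and its conclusion
    is either target's conclusion or absurdity ('#')."""
    found_prems, found_concl = found
    target_prems, target_concl = target
    return found_prems <= target_prems and found_concl in (target_concl, '#')

# Having proved the stronger  B : B , we may discharge the goal  A, B : B
# and simply omit the foreshadowed rule-application that would have used A.
assert subsumes((frozenset({'B'}), 'B'), (frozenset({'A', 'B'}), 'B'))

# A refutation  A, ~A : #  settles any goal of the form  A, ~A : C.
assert subsumes((frozenset({'A', '~A'}), '#'), (frozenset({'A', '~A'}), 'C'))

# But a proof using extra premises does not subsume a leaner goal.
assert not subsumes((frozenset({'A', 'B'}), 'B'), (frozenset({'B'}), 'B'))
```

A search procedure that checks subsumption at each subproblem, rather than insisting on proving the target sequent exactly, is what allows unneeded rule-applications in the skeleton-tree to be dropped.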

The rules one arrives at, after seeking to make all this the case, are those of Core Logic.

It seems to me that there are five different subcommunities of logicians who bring different perspectives to this aspect of foundational research. There are formal logicians, who teach formal rules but do not do mathematics for their daily crusts. There are philosophical logicians, who debate nuances of methodology and basic concepts, and who have a penchant for formal semantics, but who do not engage much with the nitty-gritty of proof-theory. There are foundational logicians, who take the orthodox systems as unquestioned and unquestionable instruments of intellectual mastery, and who wish to pursue very deep metamathematical issues on their basis. There are computational logicians, who have a keen sense of the immense variety of formal proofs, the ever-present risk of computational explosion when searching for them, and the peculiar *irrelevance*, to *proof-search*, of the fallacies of relevance! Finally, there are the mathematicians themselves, who only very occasionally pause---in their high-level, highly compressed deductive leaps muscled by deeply compiled concepts---to wonder what formal system would *really* be best designed to capture exactly what it is that they are permitted thus to do.

Tim is right when he says "current practice works entirely on the informal level". Mathematicians and logicians (with the sole exception of Frege?) have never *really* done the nitty-gritty of regimenting mathematical proofs as formal proofs of a particular logical system. The logical system has always been gestured at in the background, as something available if one were *really* pushed (and usually on some trifle, like "the empty set is a subset of any set"), but as something that one is always relieved never to actually have to apply.

We have only relatively recently entered the era of automated proof-generation and proof-checking, and it is bound to have reverberating consequences for even the best fruits of past logical labors. Much fruit will fall from the tree of knowledge, because it is overripe and hangs too heavily from its branches. The new growth will have to be able to resist the shake-up of automated proof-search and automated proof-verification, and be spurred on by it. We need rules of inference (by means of which proofs are constructed) that are taut, relevant, and highly efficient---designed to maximize potential epistemic gains. Procedures previously regarded as infeasible in the pre-computer era are now routinely dispatched in milliseconds, even when they involve billions of steps. I believe that investing in precisely the right intellectual equipment at this juncture in the evolution of mathematics will pay handsome dividends in the not-too-distant future. We have to begin with the utterly basic: how best do we formalize deductive reasoning, so as to achieve an adequate explanation of expert practice, and furnish a canon for its future guidance?

Neil Tennant
