[FOM] From EFQ to Research 2--reply to Harvey Friedman
Tennant, Neil
tennant.9 at osu.edu
Thu Sep 10 12:10:08 EDT 2015
Harvey Friedman has produced a second version of his list of ten questions, taking into account my earlier responses to them. This is called dialectical progress; and I am happy to see whether it can continue. The current set of answers below might involve some elements of repetition, though, since the re-posed questions themselves do. One has to bear in mind that on a forum like FOM it may be that something becomes truer the more often it is said, just as a question can become more urgent the more frequently it is raised (regardless of earlier answers given). :-)
_______________
"1. What can we prove about the Tennant turnstile? With propositional
calculus with atoms p,q,r,... we do not have p,notp |- q. But we do
have p |- notp arrows q. What essential feature, capable of
generalization, does one have but not the other?"
I gave a long list of properties of deducibility in Core Logic and Classical Core Logic in my earlier reply. But in order to keep this particular reply productively focused, I shall confine myself to citing one 'essential feature'---namely, the relation of relevance among premises and conclusion of any classical core proof (hence: any core proof) that I have described in previous postings, and that was established in
‘The Relevance of Premises to Conclusions of Core Proofs’, Review of Symbolic Logic 8, no. XXX (2015), pp. XXX-XXX, DOI: http://dx.doi.org/10.1017/S1755020315000040
_______________
"Very limited partial results only are available at present. One is,
according to Tennant, that if A |- B by truth tables, then A |-' B or
A |-' # in the Tennant turnstile |-' (classical logic)."
Let's re-phrase this: "One [partial result] is, according to Tennant,
that if A |= B by truth tables, then A |-' B or A |-' # in the Tennant turnstile
|-' (classical core logic)."
Likewise, Harvey's
"One immediate question that this suggests is, what is the relationship
between A |- # by truth tables, and A |-' #?"
should be re-phrased as
"One immediate question that this suggests is, what is the relationship
between A |= # by truth tables [i.e., a column of Fs under (all members of) A in the truth table], and A |-' #?"
If A is a single sentence, then A |= # by truth tables implies A |-' #.
If A is a (nonempty) set of sentences, then A |= # by truth tables implies that for some (nonempty, possibly proper) subset A* of A we have A* |-' #.
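To see the two claims in action, here is a minimal truth-table sketch in Python; the tuple encoding and the helper names (atoms, holds, entails_absurdity) are my own, purely for illustration, and are not Tennant's notation.

from itertools import product, combinations

def atoms(f):
    # Atomic letters of a formula encoded as nested tuples:
    # ('atom','p'), ('not',f), ('and',f,g), ('or',f,g).
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def holds(f, v):
    # Evaluate formula f under the valuation v (a dict: atom -> bool).
    tag = f[0]
    if tag == 'atom':
        return v[f[1]]
    if tag == 'not':
        return not holds(f[1], v)
    if tag == 'and':
        return holds(f[1], v) and holds(f[2], v)
    return holds(f[1], v) or holds(f[2], v)   # tag == 'or'

def entails_absurdity(A):
    # A |= #  iff  no row of the truth table makes every member of A true.
    letters = sorted(set().union(*(atoms(f) for f in A)))
    return all(not all(holds(f, dict(zip(letters, row))) for f in A)
               for row in product([True, False], repeat=len(letters)))

p, q = ('atom', 'p'), ('atom', 'q')
A = [p, q, ('not', p)]   # A |= #, although q does no work
assert entails_absurdity(A)
# ...and indeed some (here: proper) subset of A already suffices:
assert any(entails_absurdity(list(S))
           for n in range(1, len(A)) for S in combinations(A, n))

The second assertion is the point of the set-of-sentences case: the refutation need only draw on those members of A actually responsible for the inconsistency.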
"Also does the obvious analog of the Tennant partial result above hold
for intuitionistic propositional calculus, or classical predicate
calculus, or intuitionistic predicate calculus?"
Yes, for all three of them.
___________________________________
"2. What is being offered for first order predicate calculus? I have
not seen the Tennant turnstile in action on the FOM with quantifiers."
The first-order extensions of the propositional systems are straightforward, without any modification of the orthodox Gentzenian rules for the two quantifiers. The only change imposed in Core Logic is that all elimination rules are in parallelized form, with their MPEs (major premises for eliminations) standing proud. (This is what ensures normality of proofs.) In this respect, it is only the elimination rule for the universal quantifier that changes. It becomes
         ___(i)       ___(i)
          Ft1 , ... ,  Ftn
                :
(x)F(x)         G
________________________(i)
            G
At least one subordinate assumption of the form Fti must have been used in the derivation of the subordinate conclusion G; this ensures relevance (or, better, prevents irrelevance from creeping in through what would otherwise in effect be unnecessary thinning).
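As a toy illustration of that side condition (the encoding is my own, not Tennant's), the discharging step is legitimate only if at least one discharged instance was actually used in the subordinate derivation of G:

def forall_elim_ok(discharged_instances, used_assumptions):
    # discharged_instances: the assumptions Ft1,...,Ftn that the step discharges.
    # used_assumptions: the assumptions the subproof of G really depends on.
    return any(inst in used_assumptions for inst in discharged_instances)

# Blocked: G was derived without touching any instance of F,
# so discharging Ft1, Ft2 would be vacuous thinning in disguise.
assert not forall_elim_ok({'F(t1)', 'F(t2)'}, {'H(a)'})
# Allowed: some instance does real work in the subproof.
assert forall_elim_ok({'F(t1)', 'F(t2)'}, {'F(t1)', 'H(a)'})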
Note that the orthodox systems of C, I and M can have their universal-elimination rule formulated in this way too. So the introduction of quantifiers into the language does not usher in any big new metalogical changes in the relationships among Core and Classical Core, on the one hand, and C, I and M on the other.
______________________________________
"According to Tennant, the axioms and rules for quantifiers are
unchanged."
Yes.
"But there may be unexpected interactions. What would a
detailed analysis of Tennantism look like for, say, the usual proofs
of
n^2 = 2m^2 has no solution in nonzero integers
p^2 = 2q^2 has no solution in nonzero rationals."
I cannot say at this stage. Send me your favorite informal proofs of these results, and I shall try to core-ify them, time permitting.
_______________________________
"3. It has been suggested that cut is not needed to formalize actual
mathematical practice. I take this to be prima facie false from the
fact that the blowup in cut elimination is vicious, and there are
beautiful examples. Yes, I am using a 21st century notion of "need"
here. HOWEVER, we do not have varied examples of this and related
phenomena. At the level of pure logic, I suggested looking at the
contests between logic provers, and also the experience of logic
provers - where perhaps they are even cutting all the way through
whole forests?"
I cannot speak for other logic provers; but as far as my own are concerned (which are fully automated), cut is eschewed altogether.
"Also, provers in the context of proper axioms."
Yes, I think I already conceive of a prover in this way. Its job is to establish deducibilities from sets of premises [i.e., proper axioms]. The problems it is given take the form
X?-A
and positive solutions take the form of a core proof of A or of #, from some subset of X; while negative solutions take the form 'There is no proof of A or of # from (members of) X'.
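A minimal sketch of that problem/solution shape, with hypothetical class and function names (this is not the interface of my actual prover):

from dataclasses import dataclass

@dataclass
class Positive:
    # A core proof of the query A, or of #, from some subset of X.
    conclusion: str            # the query A itself, or '#'
    premises_used: frozenset   # the subset of X the proof draws on
    proof: object              # the core proof itself

@dataclass
class Negative:
    # 'There is no proof of A or of # from (members of) X.'
    pass

def solve(X, A):
    # The deductive problem X ?- A; a real core proof-finder goes here.
    ...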
"What in fact does one see? I would imagine a lot of cuts by the machines, and
probably not removable without some blowup. I don't know if such
unremovability has been established?"
I don't know. As stressed above, you would see no cuts at all in any (positive solution) output of my prover. But I cannot speak for other provers (human ones, fashioning mechanized ones). Perhaps some of those human beings are on this list, and can chip in here?
__________________________
"4. A related issue is of course simply that in the presence of
standard proper axioms, one wants to prove an interesting thingie. One
should often find that you must invoke proper axioms that are very
very far from the thingie to be proved. Document this, and prove that
it is essential in fundamental real world situations."
Nothing further to say on this one. It does not strike me as speaking in any particularly focused way to the issue of core v. orthodox systems. It strikes me rather as a problem for both sides to deal with.
___________________________
"5. For real revisionary-ism, let's ban negation entirely - on the
grounds that negation is simply causing too much controversy on the
FOM (smile). What happens? Systematically, what is really going on?
Maybe even what is going on philosophically? This looks like it has a
lot of deep technical ramifications of an unexpected sort, and my own
view is: where there is new mathematical smoke, there is often new
foundational fire."
OK; but do you/would you still permit the use of *atomic* inferential rules for either conceptual inclusions or conceptual contrarieties? The latter take the form
A    B
______
   #
Of course, the temptation would be ever-present to introduce into the language a negation operator with the obvious (relevant!) introduction rule; but you could refuse to yield to that temptation, and study "negation-free" fragments that allow for proofs with # as their conclusions.
_____________________________
"6. Even mathematicians who are disgusted by the very thought that
someone can criticize the law of excluded middle and related badass
things that are done all the time, realize that something deep must be
done to avoid it with a so called constructive proof, that often gives
you clearly identifiable computable information, or strongly explicit
information. What is the best example, if any, of just exactly what we
know if we have conformed to the Tennant turnstile?"
If you have proved some theorem A from some set X of proper axioms using (Classical) Core Logic, what you know is that the argument/sequent X:A is valid, AND that Relevant(X,A), where 'Relevant' is defined as in my earlier posting. So you will be assured that there is No Funny Business in your proof.
"If you conform to
not using negation at all, then this has profound consequences of an
algebraic nature, in the case of arithmetic."
I'll take your word for that.
"Now I recognize the
difference here that the Tennant turnstile does not affect what is an
acceptable theorem, just what is an acceptable proof."
Yes.
"So the answer
for the Tennant turnstile has to be more nuanced. That something
happens of a clearly identifiable mathematical nature in the proof
conforming to the Tennant turnstile that doesn't happen in a proof not
conforming to the Tennant turnstile?"
Once again: Relevance. No Funny Business. And if you think this is no big deal, let me respond by pointing out that for decades now the apologists for EFQ have been telling beginning students in logic courses that we MUST have EFQ in our formal systems in order to be able to represent mathematical reasoning AT ALL. That's an unargued shibboleth that has no warrant whatsoever. Why should a contradiction in one area of our thinking commit us to any and every statement in other areas of our thinking? Both beginners and experts have strong paraconsistency-favoring intuitions, and heavily-trained (not: *well*-trained!) logicians are simply *taught to say* that EFQ is both perfectly in order and indispensable, all the while crushing down those paraconsistent misgivings, like someone suppressing a barf at a formal banquet just because they don't want to offend the hosts whose shrimps turned out to be bad.
_____________________________
"7. There is an attempted argument against intuitionists that goes
something like this. Well, I can PROVE that classical PA is sound.
Therefore why are you (intuitionists) complaining about classical PA?
The "fallacy", of course, is that there are nonconstructive steps in
this soundness proof. In this case, the offending steps are very
clearly identifiable. Now, what if I attempt to prove that PA is sound
and the issue is not LEM?"
The formalization in Classical Core Logic of your soundness proof for PA will reveal how all your reasoning is not only valid, but also relevant.
_____________________________
"In a sense, this is different because
Tennant claims that we can still prove everything we normally prove by
conversion. So then a couple of scenarios arise:
a. Such soundness proofs for the usual systems are Tennant certified.
Then we could have everybody quote that result and proof, and then go
about our usual business ignoring the Tennant turnstile. I.e., use
this as a rubber stamp.
b. Such soundness proofs for the usual systems are Tennant
decertified. Then are these soundness proofs similarly fixable the way
Tennant fixes proofs now? If so, can Tennant rewrite the proof in a
satisfactory Tennant form? Then we can use this fixed proof as a
rubber stamp
In any case, we have the prospect of having a good Tennant Rubber
Stamp, so that the mathematicians can sleep well.
Tennant strongly believes that such soundness proofs can be adjusted
in Tennant standard ways in order to get Tennant certified, thus
leading to a fine Tennant Rubber Stamp."
Amen to all of the above. Remember, the Core logician/mathematician is claiming only that all expert and correct mathematical reasoning, BECAUSE it is always relevant, can be faithfully regimented in (Classical) Core Logic.
____________________________________
"8. Automated conversion? Has Tennant written software that will
convert an arbitrary bad proof into a good one?"
No; because I prefer(red) to devote my time on such projects to writing software that produces only good ones.
_____________________________________
"Then instead of a
rubber stamp, we could have a parallel set of Journals called the
Tennant Journals, which simply takes the proofs appearing in the
present Journals and converts them to proofs conforming to the Tennant
turnstile, and there would be a huge discount available for these new
Tennant Journals."
Nothing to add in reply here; I dealt with the issue of such journals in my earlier posting
http://www.cs.nyu.edu/pipermail/fom/2015-September/019085.html
________________________________________
"9. It has been suggested that somehow the Tennant turnstile might be
useful in improving current Proof Assistants - i.e., provers. I have
severe doubts about this, ..."
My own suggestion is that *fully automated* deduction is better off proceeding within a core system. If you are a constructivist, use a proof-finder for Core Logic; if you are a classicist, use a proof-finder for Classical Core Logic. At the propositional level, the proof-finder for Core Logic suffices for both tasks, because in order to prove a classical result X:A, it suffices to prove X,~A:# in Core Logic, and then perform a terminal step of Classical Reductio. That, however, might not be optimally efficient, since at the propositional level Classical Core Logic (like Classical Logic) is (co-)NP-complete, whereas the decision problem for Core (like that for Intuitionistic Logic) is PSPACE-complete. So the trick of merely terminal classical steps comes at the price of some increase in complexity.
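A sketch of that reduction, with hypothetical function names (core_prove stands in for a Core Logic proof-finder; nothing here is my prover's actual code):

def classical_prove(X, A, core_prove):
    # To establish the classical sequent X : A at the propositional level,
    # search for a Core Logic proof of X, ~A : #, then finish with one
    # terminal step of Classical Reductio, discharging ~A.
    refutation = core_prove(list(X) + [('not', A)], '#')
    if refutation is None:
        return None
    return ('classical-reductio', refutation, A)

As noted above, this trick trades the simplicity of a single Core proof-finder for the harder decision problem of the intuitionistic-strength system.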
So-called Proof Assistants are for *interactive* theorem-proving, not for theorem-proving of the fully automated kind. Proof Assistants provide for human intervention, typically consisting of *suggestions of interpolants*, in order to create more manageable links in long logical chains of reasoning. That reduces the complexity of the residual deductive tasks 'in between' (i.e. the actual links in the chain), which can be left to the Assistant, now running in fully automated mode.

Now, here's the rub: you, the human interactor, make the whole thing more manageable for the Proof Assistant by making highly intelligent and creative suggestions of interpolants; the Proof Assistant then just manages to grind out the bits of proof 'in between', in an automated fashion. Where A1,...,An were the human interactor's suggested interpolants, we have the following overall chain [where X1,...,X(n+1) are subsets of X, in the deductive task X?-A]:
proof P1 of interpolant A1 from axioms X1;
proof P2 of interpolant A2 from axioms X2 plus assumption A1;
proof P3 of interpolant A3 from axioms X3 plus assumption A2;
:
proof Pn of interpolant An from axioms Xn plus assumption A(n-1);
proof P(n+1) of conclusion A from axioms X(n+1) plus assumption An.
Each of the proofs P1,...,P(n+1) is feasible, and the overall task X?-A is feasible.
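A sketch of that chaining, where prove is a hypothetical automated prover for a single feasible link:

def prove_via_interpolants(prove, X_parts, interpolants, A):
    # X_parts = [X1, ..., X(n+1)], interpolants = [A1, ..., An].
    # Returns the link proofs [P1, ..., P(n+1)], or None if some link fails.
    proofs, prev = [], None
    for Xi, target in zip(X_parts, interpolants + [A]):
        premises = list(Xi) + ([prev] if prev is not None else [])
        Pi = prove(premises, target)   # each link is feasible on its own
        if Pi is None:
            return None
        proofs.append(Pi)
        prev = target
    return proofs

The intelligence lives entirely in the list of interpolants handed in; the code above merely books the links.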
BUT what if one had to eschew the human input, and had to program the system to *work out for itself* what those choices of interpolants A1,...,An should be? Some of Harvey's own early work showed that choosing interpolants takes hyperexponential time even in the propositional case.
SO the chagrined conclusion must be that both classical-proof-discovery and classical-core-proof-discovery (for the problem-task X?-A) are BOUND to be hyperexponential/infeasible. If classical-proof-discovery allows for creative input from the human interactor to guide the computation with suggested interpolants, then one is 'sweeping under the rug' the computational problem of determining the interpolants themselves. The 'feasible' discovery of the proof-chain P1,...,P(n+1) of *classical* proofs will, to be sure, enable the core logician to extract classical core proofs P1*,..., P(n+1)* in linear time, and then calculate the overall reduct
[P1*,[P2*,[...[Pn*,P(n+1)*]...]]]
---but this last only at the cost, in general, of hyperexponential time.
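The nested reduct just displayed is a right-to-left fold of a binary reduction operator over the extracted core proofs. A sketch, where reduct(P, Q) is a hypothetical stand-in for the operation eliminating the cut between P and Q:

from functools import reduce

def overall_reduct(reduct, core_proofs):
    # core_proofs = [P1*, ..., P(n+1)*]. The fold itself is cheap; each
    # call to reduct is what may blow up hyperexponentially in general.
    *init, last = core_proofs
    return reduce(lambda acc, p: reduct(p, acc), reversed(init), last)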
The Proof-Assistant-praising detractor from (Classical) Core Logic can, however, take no comfort from this last observation. This is because s/he has yet to automate the process of interpolant-discovery!---which, too, will be hyperexponential in general.
If both sides have their feet put to the fire, by insisting that they undertake *fully automated proof-finding*, then, I contend, the Core logician could well enjoy a significant advantage. A direct search for a (classical) core proof, not proceeding via interpolants, has, to my mind, just as good a chance as, and perhaps, in general, even a better chance than, a direct search for either a classical proof in normal form, or a classical proof making use of cuts involving interpolants (which have to be discovered, mind!).
Harvey appears to be aware of this problematic, but---to my mind, inexplicably---appears to be leaning toward the orthodox systems as the most hopeful ones within which to conduct automated proof-search:
"... but it does raise the general issue of just
how EFQ and related things get used or non used in the Provers arena?
Independently of this, what do we now know about the frequencies and
patterns present in the Prover logs as to the various logic rules and
logic moves? I think that it is still so painful to get a serious
proof out of the required intimate relationship with a Prover, that
ANY legitimate - in the usual sense that we use - move that stops or
lessens the agonizing pain involved in such work will be seized on
without hesitation - even if it does not conform with the full
approval of (the) Tennant (turnstile). Such are the temptations of
evil. So let's get some hard data on what is really going on in the
intense battlefield."
I agree about the need for hard data.
"Tennant has generously offered his own data for perusal. But I was
rather thinking of the stuff from the Proof Assistants currently in
intense use to create rigorous proofs of very impressive pieces of
mathematics. This is not a one man show, not even for Neil."
I never dreamed it could ever be! But I do believe that what I learned, as a proof theorist, from my engagement with automated deduction, is that the traditional systems of natural deduction and sequent calculus were not optimally attuned as frameworks within which to conduct efficient proof-search. One needs to re-conceptualize natural deductions and sequent proofs along the lines of core proofs. This is done without loss of completeness. Then one needs to prove lots of normalization theorems, about how, if there is a proof at all for a deductive problem of such-and-such syntactic form, then there is a (core) proof of so-and-so highly constrained form. I found immense speed-ups occurring in the resulting proof-finders that were equipped with these proof-theoretically motivated constraints on search.

And at the same time I found creative 'backwash' from the computational endeavor, as it re-shaped my concept of what kind of thing a proof actually is. It is not just an object that can be inductively defined, so as to enjoy only a 'top-down' pedigree of construction. It is *also* an object that one can home in on as the terminus of a search, and which therefore can be understood as enjoying also a(n efficient!) 'bottom-up' pedigree of construction. That is how I arrived at the concept of core proof---never once relinquishing the overriding goal that it furnish an adequate framework for all mathematical reasoning.

The pursuit of relevance was motivated entirely by these computational concerns, in order to keep searches properly focused, hence more efficient. So the kind of relevance embodied now in the core systems is a *mathematics-friendly* one, NOT the kind of relevance that you get from tinkering philosophically with axioms and rules for the arrow connective, without any concern for how that might render the resulting logical systems absolutely toothless when it comes to chewing on informal expert mathematical proofs and trying to formalize them.
_________________________________
"10. Finally, I often like to wear Black Hats, at least temporarily.
Can we improve philosophically on the argument that mathematicians
using things like EFQ (and maybe cut) should be jailed or at least
fired?"
Again: I contend that they don't *really* use such things. Only a slightly defective logical tradition projects that delinquency in thought onto what mathematicians typically do when inferring conclusions from premises. And there will be the occasional expert mathematician who is also steeped in this slightly defective logical tradition ... who thinks (mistakenly, though) that s/he *really needs* EFQ in order to formalize a particular little proof. Ahem!
"Can we improve on the argument that things like EFQ (and maybe
cut) are philosophically defective? Perhaps by comparison, with the
really "nice" moves? This spills over of course to 1 (which is more at
a technical level). And at the risk of going around in circles, I stop
here."
I don't know what is necessary and/or sufficient for an argument to succeed in showing that a proposed rule (like EFQ) is "philosophically defective". But I *can* point to EFQ's simple violation of the above-mentioned relevance condition, which is violated neither by ~A:A->B nor by AvB,~A:B. The latter moves are really "nice", because in addition to being valid they also satisfy the relevance condition. And, to repeat: *all* mathematical reasoning is like that.
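Tennant's relevance condition, defined in the Review of Symbolic Logic paper cited earlier, is much subtler than mere variable-sharing; but even this crude proxy (a toy check of my own, reusing the tuple encoding from the truth-table sketch above) already separates EFQ from the two 'nice' moves:

def shared_atom(premises, conclusion):
    # Crude proxy only: does the conclusion share an atom with some premise?
    def ats(f):
        return {f[1]} if f[0] == 'atom' else set().union(*(ats(g) for g in f[1:]))
    return bool(set().union(*(ats(f) for f in premises)) & ats(conclusion))

a, b = ('atom', 'A'), ('atom', 'B')
assert not shared_atom([a, ('not', a)], b)                # EFQ: A, ~A : B
assert shared_atom([('not', a)], ('or', ('not', a), b))   # ~A : A->B (as ~A v B)
assert shared_atom([('or', a, b), ('not', a)], b)         # AvB, ~A : B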
______________________
"11. Tennant has emphasized that all classical (intuitionistic)
validities (provabilities) are provable in his setup. But is there a
rigorous sense in which Tennant's system is either minimal or minimum?
Minimal would mean that any, or any reasonable, weakening of Tennant's
setup results in losing some validities (provabilities)."
I have argued in my book Changes of Mind that Core Logic is indeed minimal, in the sense that every one of its rules is needed in order to carry out the metalinguistic reasoning that is involved when a rational agent tries to work out how to revise her beliefs when she wishes to take on a new belief that contradicts her existing ones. So Core Logic is the inviolable core of logic itself. That, in fact, is why I now call the system Core Logic.
It would be interesting to investigate whether this 'minimal inviolable core' argument in the setting of belief-dynamics could be complemented by one focusing only on issues of logical closure---i.e., what one might call 'belief-statics'. Since the rules of Core Logic are all of them logical rules, not structural ones (apart from A:A [Reflexivity] in the case of sequent calculus), I cannot see how giving up any of those rules could fail to cripple the system in some potentially indispensable regard, for the purposes of formalizing all mathematical arguments.
Neil Tennant