FOM: Relevance

Neil Tennant neilt at hums62.cohums.ohio-state.edu
Wed Nov 19 06:22:35 EST 1997


I would like to raise what might be a new topic for discussion on the
list: the importance of 'relevant' logical reasoning for actual
mathematics.

For the purposes of the discussion that I would like to initiate, I shall
define a logic as 'relevant' if and only if it does not permit the
derivation of an arbitrary proposition B once one has proved (for some
A) both A and ~A; that is, the 'absurdity rule' is not correct in the
logic concerned. Or, expressed another way, thinning (dilution) is not
acceptable. 
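To fix ideas, write # for absurdity. The rejected pattern is:

	A, ~A; therefore #	(~-Elimination)
	#; therefore B		(ABSURDITY RULE)
	Hence:  A, ~A; therefore B

In sequent terms the same damage is done by thinning: from the
sequent A,~A: (recording that {A,~A} is inconsistent), thinning on
the right yields A,~A:B for arbitrary B.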

There are two different ways to 'relevantize' a logic *after* giving
up thinning:

1. Change the stock of theorems and inferences involving the
object-language conditional, but keep the other important structural
features of the deducibility relation, such as unrestricted cut,
intact. Disjunctive syllogism in the form 'AvB, ~A, therefore B',
however, would have to go.

2. Give up unrestricted cut and thinning, but keep disjunctive syllogism.

(1) and (2) are the two understandable reactions to the classic
Lewis proof of the undesirable 'A, ~A; therefore B':

	A; therefore (AvB)	(v-Introduction)
	(AvB), ~A; therefore B	(DISJUNCTIVE SYLLOGISM)
	Hence (by CUT on AvB):   A,~A; therefore B 

One obviously has to reject at least one of the capitalized rules if
one accepts v-Introduction and wishes to have a relevant logic.

Route (1) is that taken by the Anderson-Belnap tradition. Logics such
as R and its neighbours deal with a reformed arrow (object-language
conditional) but enjoy unrestricted cut. In my own work I have explored
the alternative route (2). In a natural deduction context, this
involves only five (minor?) changes to standard 'non-relevant' practice:

(a) banning the absurdity rule;

(b) insisting that proofs be in normal form;

(c) requiring, for the application of 'discharge' rules (such as
~~-intro), that the assumption to be discharged actually have been
*used* in the subproof;

(d) liberalizing slightly the rule of v-Elim (proof by cases), so
that if one case leads to absurdity then the conclusion of the other
case can be brought down as the main conclusion (illustrated, along
with (e), just after this list);

(e) re-stating >-Intro (conditional proof) as a two-part rule:
(i) if you have inferred B then you may discharge A (if it has been
used as an assumption to get B) and infer A>B; 
(ii) if you have reduced A to absurdity you may infer A>B and
discharge A.
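
For illustration (writing # for absurdity as before): rule (d)
delivers disjunctive syllogism directly, and rule (e)(ii) yields a
conditional from a reductio of its antecedent:

	Case A:  A, ~A; therefore #	(~-Elimination)
	Case B:  B; therefore B
	Hence (v-Elim, liberalized as in (d)):  AvB, ~A; therefore B

	A, ~A; therefore #	(~-Elimination)
	Hence (>-Intro, part (ii), discharging A):  ~A; therefore A>B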

When these reforms are carried out, one has the following
relevantization theorem:

THEOREM: There is a linear-time transformation that turns any cut-free (or
normal) proof of a conclusion A from a set X of assumptions into a
relevant proof of A or of absurdity from some subset of X.
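
A toy instance: the two-step classical proof of B from {A,~A} via
the absurdity rule relevantizes to a proof whose result is
absurdity itself:

	Before:  A, ~A; therefore #; therefore B	(ABSURDITY RULE)
	After:   A, ~A; therefore #

Here the conclusion B gives way to #; in other cases the conclusion
survives and it is the unused assumptions that get discarded from X.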

COROLLARY 1. Mathematics, if consistent, can be done wholly within
such a relevant logic. That is, ordinary mathematical reasoning can in
principle be 'relevantized without loss'. This holds both in the
classical case and in the intuitionistic case.

COROLLARY 2. Relevantization does not increase the complexity of the
decision problem in the propositional case.
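
For comparison: validity for classical propositional logic is
coNP-complete, and for intuitionistic propositional logic
PSPACE-complete; the claim is that these bounds survive
relevantization.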

These corollaries contrast with the following situation in
Anderson-Belnap style relevant logic:

1^AB. A-B relevance theorists have never given a uniform recipe for
re-deriving mathematics relevantly (in their way); at best they
relevantize in piecemeal fashion, and might well lose certain
important theorems, such as the four-squares theorem.

2^AB. A-B propositional relevant logics have horrendously complex
decision problems, if (unlike R, which Urquhart showed to be
undecidable) they happen to be decidable at all.

Because of the contrast between Corollary 1 and 1^AB, and between
Corollary 2 and 2^AB, I strongly favour approach (2) described
above. One principled drawback to this approach to relevantization,
however, is that mathematicians actually use cut all the time, and
cut massively reduces proof length. That is, they prove lemmas from
axioms, and then use these lemmas to get deeper theorems. The lemmas
are 'cut formulae' in any formal reconstruction of their proofs.
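
Schematically, with L the lemma:

	Axioms X; therefore L		(proof of the lemma)
	L, axioms Y; therefore T	(proof of the theorem from the lemma)
	Hence (by CUT on L):  X, Y; therefore T

A cut-free proof of T would have to inline the proof of L at every
point where L is appealed to; hence the blow-up discussed below.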

Question: could there be a theory of "mathematicians' intuition" as to
when *even while cuts are being used* the underlying reasoning will
turn out to be genuinely relevant? The question can be posed more
precisely as follows:

Call a proof P using cuts *genuine* just in case its result X:A is
also the result of the normal (cut-free) relevant proof obtained
from P by eliminating cuts (normalizing) and then applying the
relevantizing transformation of the preceding theorem. Thus genuine
proofs are ones all of whose assumptions really are relevant to
obtaining the conclusion, even though cuts might be used along the
way: in the (very time-consuming) cut-elimination process and the
(linear-time) relevantization process, all the undischarged
assumptions survive as 'used' assumptions.
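
By this standard the Lewis proof above is, presumably, not genuine:
eliminating its cut and then relevantizing yields A, ~A; therefore #,
whose result differs from that of the original A, ~A; therefore B.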

Now: Could there be a *tractable* algorithm for determining
genuineness of a proof with cuts even though there is blow-up in
turning any genuine proof into a cut-free, fully relevant one? 

If so, then the relevantist of type (2) could claim to have a workable
explication of mathematicians' intuitions of relevance in their
reasoning. Mathematicians would presumably be capable of applying the tractable
algorithm in question to tell that there was 'underlying relevance',
even though they would not bother to undertake the blow-up that would
be involved in producing an actually relevant, cut-free proof.

Reactions?

PS. Anyone wishing to have a formal statement of all the details of
this alternative approach to relevantization will find them in ch.10
of my book 'The Taming of The True', Oxford University Press, 1997.


