[FOM] Tennant on Relevant Logic

A.P. Hazen a.hazen at philosophy.unimelb.edu.au
Wed Mar 8 02:45:32 EST 2006


	Neil Tennant's reply has confirmed my vague recollection that 
what he calls  "Classical Relevant Logic" bears little similarity to 
the Relevant (or Relevance) logics of the Anderson-Belnap tradition, 
which Mark Lance, Arnon Avron, Ed Mares and I had had in mind when 
making our posts.  Note that it is the logics of the Anderson-Belnap 
tradition which permit interesting non-trivial negation-inconsistent 
theories, and weak logics of that family which have been explored as 
bases for non-trivial "naive" set theory.

	Some of Neil's remarks about the history of the subject seem 
to me to be seriously misleading.

	(1) The "Automated Reasoning Project" at the Australian 
National University was founded (before, I think, Tennant's 
brief stint as professor of philosophy at that university) to explore
the computer implementation of decision procedures for non-classical 
logics, with some hope that this might help lead to the solution of 
the then-open decision problem for the relevant propositional logic 
R.  (R was subsequently shown undecidable by Alasdair Urquhart, 
without computer assistance: cf. JSL 49 (1984), pp. 1058-1073.)  The 
main early success was the implementation of a decision procedure for 
a weaker relative of R, essentially linear logic without 
exponentials.  (The basic decision procedure had been discovered by 
R.K. Meyer much earlier, but optimizing something a proof theorist 
discovers for practical computer use can be a non-trivial task.) 
This work is summarized in the monograph "Automated Theorem Proving 
in Non-Classical Logics" by P.B. Thistlewaite, M.A. McRobbie and R.K. 
Meyer (Pitman (U.K.) or Wiley (elsewhere), 1988).  One technique 
pioneered was the use of small matrix countermodels to prune the 
search: the A.R.P. went on to apply this to a broader range of 
problems and became an internationally known center for research in 
automated theorem proving.  (It has since been merged into the 
A.N.U.'s general computer science research program.)
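To illustrate the pruning technique for readers unfamiliar with it (this is an illustrative sketch of the general idea, not A.R.P. code): any finite matrix that validates a logic yields a cheap refutation test, since a formula that takes an undesignated value under some assignment cannot be a theorem.  The sketch below uses the familiar three-valued RM3 matrix, which validates every theorem of R, to refute the classically valid but irrelevant formula p -> (q -> p).

```python
from itertools import product

# RM3 matrix: truth values 0 (false), 1 ("both"), 2 (true); designated = {1, 2}.
# RM3 validates every theorem of R, so any formula taking an undesignated
# value under some assignment is NOT a theorem of R.
DESIGNATED = {1, 2}

def neg(a):      return 2 - a
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)

def imp(a, b):
    # RM3 implication table: 0->x = 2; 1->x = x; 2->x = 2 if x == 2 else 0.
    return max(neg(a), b) if a <= b else min(neg(a), b)

def refuted(formula, nvars):
    """Return a falsifying assignment if the matrix refutes the formula, else None."""
    for vals in product((0, 1, 2), repeat=nvars):
        if formula(*vals) not in DESIGNATED:
            return vals
    return None

# 'p -> (q -> p)': classically valid, but refuted by the matrix,
# hence not a theorem of R.
print(refuted(lambda p, q: imp(p, imp(q, p)), 2))   # a falsifying assignment
# 'p -> p': validated by the matrix (a necessary condition for R-theoremhood).
print(refuted(lambda p: imp(p, p), 1))              # None
```

A single small matrix like this cannot certify theoremhood, but it discards non-theorems without any proof search, which is why batteries of such matrices were effective for pruning.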

	(2) The system R# of relevant arithmetic was defined by Meyer 
in the 1970s.  It is not just the application of the logic R to 
deriving theorems from the usual classical axioms of PA: the 
non-classical implication connective is used in its formulation of 
the principle of induction.  The conjecture that its class of 
theorems was closed under the "Rule Gamma" (if |-~A and |-(AvB), then 
|-B) seemed plausible: after all, analogous closure under Gamma holds 
for weaker systems (e.g. the pure logic R) and also for a stronger, 
infinitary, system of relevant arithmetic that Meyer had also 
defined.  The proof that it isn't stems from a result of Harvey
Friedman's; the observation that Friedman's result applied to R# was 
Meyer's, and the result was reported in a joint paper, JSL 57
(1992), pp. 824-831.
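To make the contrast concrete (my gloss, not a quotation from Meyer; details of his exact formulation may differ), the point is that R#'s induction scheme is stated with the relevant arrow rather than the material hook:

```latex
% Induction in R#, with relevant implication \to (sketch):
(A(0) \land \forall x\,(A(x) \to A(x'))) \to \forall x\, A(x)
% versus the classical PA scheme with material implication \supset:
(A(0) \land \forall x\,(A(x) \supset A(x'))) \supset \forall x\, A(x)
```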

	(3) HAD the theorems of R# been closed under Gamma, it would 
have had the same theorems IN THE CLASSICAL FRAGMENT OF ITS LANGUAGE 
as PA.  Since R# can be shown to have many small finite models 
(models which verify negation-inconsistent extensions of R#), this
would have made the A.R.P.'s computational techniques a promising way 
to explore provability in PA.  In the 1980s people from the A.R.P. 
(and other workers in relevant logic) spoke semi-seriously of 
attacking Fermat's Last Theorem in this way, but the A.R.P. was not sold
to the funding bodies as a "We'll prove Fermat's Last Theorem" 
project.
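The computational appeal can be conveyed by a loose analogue from ordinary modular arithmetic (this is an illustration of finite-model pruning in general, not Meyer's actual construction): a Diophantine equation with no solution modulo m has no integer solution, so an exhaustive search over a small finite structure can refute a statement about an infinite domain.

```python
from itertools import product

def solvable_mod(m, eqn, nvars):
    """Exhaustively test whether eqn(*xs) == 0 (mod m) has a solution."""
    return any(eqn(*xs) % m == 0 for xs in product(range(m), repeat=nvars))

# Squares are 0 or 1 mod 4, so x^2 + y^2 is never congruent to 3 mod 4:
# unsolvability mod 4 refutes integer solvability by a finite check.
print(solvable_mod(4, lambda x, y: x*x + y*y - 3, 2))   # False

# By contrast, x^2 + y^2 = 5 passes the mod-4 test (and indeed 1 + 4 = 5).
print(solvable_mod(4, lambda x, y: x*x + y*y - 5, 2))   # True
```

The hope described above was of the same shape: failure of a candidate theorem in one of R#'s small finite models would have ruled out its provability, making cheap finite searches a filter on provability in PA.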

---

Allen Hazen
Philosophy Department
University of Melbourne
