FOM: relevance and interpolation

Stephen Ferguson srf1 at st-andrews.ac.uk
Thu Nov 20 12:26:13 EST 1997


I read Neil Tennant's comments about relevant logic with interest, in
particular his clear statement of the differences between his program of
relevance and the better-known program of Anderson and Belnap (etc.).

I have a few thoughts to throw out, which I'm fairly certain are
related to this, though I can't quite see how. I hope these thoughts
will make sense to *someone* on the list, and maybe these issues can be
pushed to get us somewhere.

The first thought concerns Craig's Interpolation Theorem (JSL 1956).
His result was about -->, but Lyndon generalised it to cover sequents.
Let A and B be sets of formulas (what gets written in the usual
presentation of the LK rules as capital sigmas and deltas).

The theorem states that if A |- B, then either A and B have non-logical
vocabulary in common, in which case there is some C containing only that
shared vocabulary such that A |- C and C |- B, or else A is a
contradiction or B is a theorem.
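To fix notation, here is a rough formalization of the statement I have
in mind (the exact side conditions vary between presentations; voc(X) is
just my shorthand for the non-logical vocabulary occurring in X):

\[
A \vdash B \;\Longrightarrow\;
\exists C\,\bigl(A \vdash C \;\wedge\; C \vdash B \;\wedge\;
\mathrm{voc}(C) \subseteq \mathrm{voc}(A) \cap \mathrm{voc}(B)\bigr)
\;\;\text{or}\;\; A \vdash \bot \;\;\text{or}\;\; {}\vdash B.
\]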

This result holds for intuitionistic logic and most modal logics
(quantified S5 fails); it fails for E, T and R, but holds (I think) for
RM. It holds for second-order classical logic, but cardinality logics
such as L(Q0) and L(Q1) fail to have the interpolation property.

Okay, here is my thought: interpolation somehow shows the relevance of
each stage of a proof - not a particularly new thought, but one not taken
very seriously in the literature. Tennant is ruling out one of the three
cases, i.e. A |- B where A and B share no vocabulary and A is a
contradiction, thereby restricting the genuine sequents to A |- B with
common vocabulary, or with B a theorem. (Question: is he also ruling out
A |- B where no vocabulary is shared and B is a theorem, by restricting
thinning, since this should really just be |- B, with A irrelevant?)
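Schematically (my labelling, not Tennant's), interpolation splits the
classically valid sequents A |- B into three cases:

\[
\begin{array}{ll}
\text{(i)} & \mathrm{voc}(A) \cap \mathrm{voc}(B) \neq \emptyset
             \ \text{(an interpolant $C$ exists);}\\
\text{(ii)} & A \vdash \bot \ \text{($A$ is inconsistent);}\\
\text{(iii)} & {}\vdash B \ \text{($B$ is a theorem).}
\end{array}
\]

If I read him right, Tennant discards the sequents that are valid only
via (ii), and my parenthetical question is whether his restriction on
thinning also discards those valid only via (iii), leaving (i) as the
sole source of genuine sequents.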

So it seems that the changes in the rules that Tennant is suggesting are
merely an explicit way to ensure that all genuine sequents are equivalent
to those that interpolate in the original calculus, be it classical or
intuitionistic.

Secondly, the connection between logic and mathematics. There is
something which Detlefsen brings out in his article on the
Russell/Poincare debate: Poincare seemed to have held that even if one
could reduce all mathematical proofs to logic, the logical proofs alone
would not suffice to show anything mathematical.

His thought runs something like this: if I know A mathematically, and I
have a logical proof of A --> B, then I have a proof of B. But I don't
have a mathematical proof of B, as I have not shown any sort of
connection between A and B; I have not exposed the "inner architecture".
Would a Tennant-style relevantization of the logic have satisfied
Poincare, or do his objections require an even stronger notion of
relevant connection than Tennant's methods could provide?

Finally, I always wonder what we are trying to achieve with an account
of validity - take the classical account. I guess people usually try to
present it as necessary and sufficient conditions for an argument form
to be valid; what would be lost if we thought it supplied only a
necessary criterion, so that by merely being classically valid we have
not yet shown that we have a sensible argument form? Then we could place
further requirements on the sufficiency side, e.g. classically valid
plus interpolation, or T-relevance, or AB-relevance, whatever.
Depending on how this is answered, I may have something to say about
Neil's problems with lemmata and their role in relevant reasoning.
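To make the shape of the proposal explicit (just a sketch; "Rel" is my
placeholder for whichever extra filter one chooses):

\[
A \models_{\mathrm{good}} B \;:\Longleftrightarrow\;
A \models_{\mathrm{cl}} B \;\wedge\; \mathrm{Rel}(A,B),
\]

where Rel(A, B) might be the interpolation condition above, Tennant-style
relevance, or Anderson-Belnap relevance. Classical validity is then
necessary but not sufficient for being a good argument form.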

Am I barking up the wrong trees with these, or is there any mileage in
any of them?

Yours

Stephen


----------------------------------------------------
Stephen Ferguson       http://www.st-and.ac.uk/~www_spa/STUDENTS/srf1
27, North St, St Andrews, KY16 9PW

Logic and Metaphysics, Univ of St Andrews
(01334) 462484
----------------------------------------------------




