[FOM] paraconsistent logic and computer science

Arnon Avron aa at tau.ac.il
Fri Oct 12 06:09:27 EDT 2007


On Fri, Sep 28, 2007 at 11:34:32AM +0200, Joseph Vidal-Rosset wrote:
 
> I have a very simple question to you, because you are also a computer
> scientist. 
> My question is about the definition and the utility of paraconsistent
> logic in computer science, in artificial intelligence, and in science
> in general. 
> 
> In what precise situations do we have to allow $p, \neg p \vdash q$?
> It is very difficult for me to understand what a "true
> contradiction" is, and to accept "dialetheism". I still believe that
> rejecting contradiction as false is a sane methodological position in
> science and in philosophy in general. 
> 
> So, could you convince me with understandable examples that allowing
> paraconsistency, at least in some sense, is useful or even necessary,
> in science? What expert system, or what theorem prover, for example,
> can accept it, and why?
> 
> You are of course welcome to cc your reply to the FOM list, if you
> believe that it can be a clarification for other colleagues.

Dear Jo,

Let me start with the most difficult part of your question: the utility
of paraconsistent reasoning in science.

First of all, I should clarify that my interest in paraconsistency
is based on purely pragmatic considerations. I do not think
that true contradictions exist "in nature". Moreover: like you, I believe 
that rejecting contradiction as false is a sane methodological position in
science and in philosophy in general. In fact, it is the *only*
sane position as long as  we confine ourselves to
really meaningful propositions. I maintain that it should be clear
to anyone that in every area, a proposition can be
taken as truly meaningful only if it has a definite classical truth-value:
either true or false, and not both (whether we know this definite
truth-value or not is not relevant).

   However, in all areas of science and life we also use in our reasoning
(and thinking in general) sentences that are not really meaningful
in themselves, but only look *as if* they express some proposition.
This is true even in mathematics. In fact, this was the whole point
of Hilbert's Program - and I basically agree with the philosophy
behind it (I disagree with Hilbert about the right division between
the meaningful ("real") sentences and the in-principle-meaningless
("ideal") ones, but this is not important for the present discussion).

  Now according to the instrumentalist approach to the philosophy 
of science (on which Hilbert's Program was based, and to which I
partially adhere), the "ideal" sentences in some area of knowledge 
are nothing but an instrument for predicting/deriving the truth-value of
real propositions. How to use such an instrument in a particular case
should be decided according to two main criteria: efficiency in getting
the desired results, and reliability of the results obtained by
using the instrument. Neither of these criteria *forces* us to use
classical logic. An instrument is only an instrument, and at least in
principle, if in certain circumstances another instrument is more efficient
and reliable than classical logic, it would be foolish not to use it.

 Now different areas usually have different criteria for reliability 
and efficiency. Thus in the empirical sciences reliability means
success in experimental tests. When this is the supreme criterion,
I really do not see any reason why inconsistent theories equipped
with some sophisticated paraconsistent inference mechanism may not 
prove to be very reliable and efficient. In fact, I suspect that this is
exactly what happens from time to time in Physics (the dual character
of light, and the conflict between general relativity
and quantum mechanics might be two cases in point, but my 
knowledge and understanding of Physics are not sufficient for saying
anything reliable here).

 In mathematics the criteria for reliability are of course different.
But even here, once we are ready to give up 100% reliability,
I do not find it totally inconceivable that some combination
of *naive* set theory and a sophisticated paraconsistent logic 
might prove to be no less efficient and reliable in deriving
true arithmetical propositions than the use of
ZF extended by some of the more doubtful axioms of strong infinity.
Such a combination might even better reflect Cantor's (and our) original 
intuitions about sets than do such extensions of ZF. After all,
Cantor was aware of some of the paradoxes, and still he continued
to develop set theory in a fruitful way. Obviously, he was not
applying classical logic freely at that time, but rather some
more selective, paraconsistent mechanism (most probably different
from all the paraconsistent systems suggested so far, including mine).
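
To make concrete why a paraconsistent logic would be indispensable for any
such combination, recall the standard derivation (nothing original here;
$R$ just names the Russell set, and $q$ is an arbitrary sentence): naive
comprehension gives a set $R = \{x : x \notin x\}$, hence
$R \in R \leftrightarrow R \notin R$, and so both $R \in R$ and
$R \notin R$ become provable. Classical logic then continues

   $R \in R \vdash (R \in R) \vee q$                 (addition)
   $(R \in R) \vee q,\ R \notin R \vdash q$          (disjunctive syllogism)

so every sentence $q$ is provable and the theory is trivial. Any
paraconsistent logic must invalidate some step of this derivation (in many
systems it is the unrestricted use of disjunctive syllogism that is given
up), and this is what keeps the contradiction local instead of contaminating
the arithmetical consequences.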

[Still, it should be noted that classical logic should be the default
choice, because it does have two great advantages:
1) Its great advantage from the point of view of efficiency is simply
that we are well acquainted with classical logic, and so it is much 
easier for us to use it. Moreover: the danger of not applying it correctly 
is much smaller than when we use less natural logics.
2) From the point of view of reliability, the more the "ideal"
sentences are based on strong intuitions according to which
they do, after all, convey some truth about some abstract domain,
the greater is our confidence in the results obtained by applying
classical logic to them. Needless to say, the use of classical logic
becomes 100% reliable when we use *only* fully meaningful, real
propositions in our reasoning.]

 Turning now to CS and AI, I think that the answer to your question
is much simpler here, and was given by Belnap long ago. The knowledge
stored in a KB (Knowledge Base) usually comes from different sources,
and so might be contradictory. Now even if one thinks that consistency
should eventually be restored to the KB, doing this in a reliable
and efficient way might take time, and during that time the KB 
must continue to function as efficiently and reliably as possible. 
This can be done only if, during that time, it uses an inference mechanism
that tolerates the existence of contradictions (and since a KB
is frequently updated, such a mechanism might in practice be needed
all the time).
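
To illustrate Belnap's idea with a toy example (only a sketch of mine, not a
description of any existing system; the names KB, tell and ask are invented
for the illustration, and a real knowledge base would of course do far more
than look up atoms): each atomic statement is mapped to one of four values -
told true, told false, told both, or told nothing - and a query never
succeeds merely because the KB also happens to contain both p and not-p.

from enum import Enum

class V(Enum):
    NONE  = "told nothing"
    TRUE  = "told true"
    FALSE = "told false"
    BOTH  = "told true and told false"   # the contradictory value

class KB:
    # A toy Belnap-style knowledge base: it records what the sources
    # reported, without trying to restore consistency.
    def __init__(self):
        self.told_true  = set()
        self.told_false = set()

    def tell(self, atom, value):
        # Record a report from some source; sources may disagree.
        (self.told_true if value else self.told_false).add(atom)

    def ask(self, atom):
        t = atom in self.told_true
        f = atom in self.told_false
        if t and f:
            return V.BOTH
        return V.TRUE if t else (V.FALSE if f else V.NONE)

kb = KB()
kb.tell("p", True)     # source 1 reports p
kb.tell("p", False)    # source 2 reports not-p
kb.tell("r", True)

print(kb.ask("p"))     # V.BOTH  - the contradiction is recorded, not explosive
print(kb.ask("q"))     # V.NONE  - q does not follow from p together with not-p
print(kb.ask("r"))     # V.TRUE

The point is only the last three queries: contradictory information about p
leaves the answers about unrelated atoms untouched, which is exactly what an
explosive consequence relation would not allow.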

This message is already too long. So I'll stop here.

Cheers

Arnon Avron

