[FOM] Paraconsistent System ---Truth Transfer Policy
Mirco Mannucci
mmannucc at cs.gmu.edu
Tue Oct 24 15:32:20 EDT 2006
Dear Vladimir, thanks for picking up this thread! Here are my comments:
1) PROOF THEORY
---- V.Sazonov at csc.liv.ac.uk wrote:
> Whether it restricts logics or extends it depends perhaps on the point
> of view. Anyway, taking into account a measure of credibility means
> imposing a restriction on derivations.
I agree. However, "restriction on derivations" does not necessarily mean eliminating certain
types of rules. Instead, I think one should add a TRUTH TRANSFER POLICY to be applied when rules
are used. Let me try to explain myself: what are deduction rules? At their very core,
they are basic transformers that work like this: you give me a multiset of formulae that you
hold true now, and I give you another multiset that will also hold true.
LOGICAL RULES ARE CREDIBILITY VALUES TRANSFER OPERATORS,
from the premisses to the consequences.
Now, I think this transfer should be made explicit by what I like to call a TRUTH TRANSFER POLICY
(TTP). A TTP prescribes, for a specific rule (say ^-introduction in Natural Deduction), which
credibility one should assign to the consequents, knowing the credibility of the antecedents.
A particularly simple (but nevertheless important) TTP assigns the same transfer cost to each rule:
this TTP corresponds to the slogan "credibility = length of a proof". In this case, one "pays
the same price", as far as credibility goes, each time one's proof branches out one step further
(as you pointed out, this TTP is perhaps all that is needed for Harvey Friedman's new result).
Once you have a TTP, given a concrete proof, one can calculate the total
credibility of the conclusion from the hypotheses (notice that different proofs of the same
formula can have very different credibility ratings).
Assuming that credibility can be any value in [0, 1], and that the axioms of a theory T
have credibility 1, one can measure the credibility of a particular theorem of T along its available
proofs.
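To make the idea concrete, here is a minimal sketch of the calculation just described. Everything in it (the `Proof` class, the `uniform_ttp` name, the 0.99 decay factor) is illustrative, not part of the original proposal: a TTP is modelled as a function from the credibilities of the antecedents to a credibility for the consequent, axioms (leaves) get credibility 1, and the credibility of a conclusion is computed along one concrete proof tree.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Proof:
    """A proof-tree node: a conclusion derived from its sub-proofs by one rule application."""
    conclusion: str
    premises: List["Proof"] = field(default_factory=list)

# A TTP maps the credibilities of the antecedents to a credibility for the consequent.
TTP = Callable[[List[float]], float]

def uniform_ttp(decay: float = 0.99) -> TTP:
    """The 'same transfer cost for each rule' TTP: every rule application
    costs the same multiplicative factor; axioms (leaves) get credibility 1."""
    return lambda creds: decay * min(creds) if creds else 1.0

def credibility(proof: Proof, ttp: TTP) -> float:
    """Total credibility of the conclusion, computed along this concrete proof."""
    return ttp([credibility(p, ttp) for p in proof.premises])

# Two rule applications on top of an axiom: credibility 0.99 * 0.99 = 0.9801.
two_steps = Proof("C", [Proof("B", [Proof("A")])])
```

Note that setting `decay=1.0` recovers the trivial TTP of standard proof theory: no loss at all, however long the proof.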
Notice that standard proof theory is simply a proof theory where the TTP is TRIVIAL, i.e.
there is NO DECAY in credibility transfer (the consequence has the same
credibility as its premisses).
A perfectly nice hypothesis, but totally inadequate for what we are
talking about here (in the terms of this thread, such a choice corresponds
to feasibly consistent = consistent).
It is the analogue, in logic, of a typical assumption in basic dynamics: no friction.
Well, it is time to add friction to logic, by introducing credibility decay in proof theory: this will
act as a restriction on derivations in the precise sense that it will make one aware of the "cost"
of using certain rules in a proof (each time one uses a rule one consumes a certain amount of credibility:
that is why F(0) and F(x) --> F(x+1) does NOT mean all numbers are feasible: at each induction step one
loses steam).
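The feasibility phenomenon above can be seen in a toy computation. The decay rate and the acceptance threshold below are entirely hypothetical; the point is only that, under any non-trivial uniform TTP, the credibility of F(n) decreases with each application of the induction step and eventually falls below whatever level one would accept as "established":

```python
def credibility_of_F(n: int, decay: float = 0.999) -> float:
    """Credibility of F(n): start from F(0) at credibility 1 and apply the
    induction step F(x) --> F(x+1) n times, paying `decay` per application."""
    return decay ** n

THRESHOLD = 0.5  # an arbitrary acceptance level, for illustration only

assert credibility_of_F(100) > THRESHOLD    # F(100) is still credible...
assert credibility_of_F(10**6) < THRESHOLD  # ...but F(1000000) has run out of steam
```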
FEASIBILITY LOGIC = LOGIC + FRICTION = DAMPENED LOGIC
In some sense, the foregoing is similar to Linear Logic, in that it is resource-aware. But it makes
you aware not of which formulae you have used in a derivation, but of which rules. So, this is
RESOURCE-AWARE PROOF THEORY
not resource-aware logic.
Resource-aware proof theory also opens the way to research into "proof optimization techniques":
given an assigned TTP and a goal, which proof is more economical, and thus more credible?
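A quick sketch of why this optimization problem is non-trivial. With a non-uniform TTP, where different rules decay credibility at different rates, the shortest proof need not be the most credible one. The rule names and decay factors below are purely hypothetical:

```python
# Hypothetical per-rule decay factors: an "expensive" cut vs "cheap" rules.
RULE_DECAY = {"cut": 0.80, "and_intro": 0.99, "modus_ponens": 0.98}

def proof_credibility(rules_used):
    """Credibility of a proof's conclusion: start at 1 (credibility of the
    axioms) and pay each rule's decay factor once per application."""
    c = 1.0
    for rule in rules_used:
        c *= RULE_DECAY[rule]
    return c

# A short proof that uses an expensive cut...
short_proof = ["cut", "modus_ponens"]
# ...versus a longer, cut-free proof of the same goal:
long_proof = ["and_intro"] * 3 + ["modus_ponens"] * 2

# Under this TTP the longer proof is the more credible one.
assert proof_credibility(long_proof) > proof_credibility(short_proof)
```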
I expect various proposals of concrete TTPs, depending on what one is after: your suggestion
(keeping track of the symbols used) is one. Alessandra Carbone (a former student of Rohit Parikh)
began, a few years ago, a quite intriguing study of a "combinatorial-topological" measure
of complexity based on cycles (also, and it is not accidental, in
connection to feasible consistency, see "A.Carbone, Cycling in Proofs and
Feasibility, Transactions of the American Mathematical Society, 352:2049--2075, 2000").
I think there is much more to be dug up here, and lots of surprises ahead...
2) MODEL THEORY
> I am actually working now on a version of feasible set
> theory, more precisely, "polynomial" set theory (based on the ideas of
> Bounded Set Theory for HF whose provably recursive set theoretic
> operations = explicitly definable in the Delta language of BST =
> polynomial time computable).
>
Interesting: I suspect that your set theory (or variants thereof) is the natural one when
one performs Hyland's construction of the realizability topos, but replaces Kleene's
realizability with linear-time realizability.
Maybe Andrej Bauer, or other "realizability toposophers" on this list, know something to
that effect.
> Such a formalisation should be made so convenient and intuitive that we
> could make the derivations desirably as easily and semiformally as in
> ordinary ZFC. It is, indeed, rather inconvenient to work in the logic
> with a suitably restricted cut rule (or the like). A request for such a
> convenient, psychologically comfortable formalisation of feasibility
> theory allowing to think model-theoretically without paying much
> attention to the formalism, and in particular to the cut rule, was
> stated in my paper On Feasible Numbers,
Yes, I fully agree. The only thing is: instead of "manufacturing" models for
feasible (arithmetical or set-theoretical) theories "manually" (i.e. ad hoc), it
would be desirable that we could look at those models, and at the standard ones,
as variants of the same underlying notion.
How? Through a revision of Tarski's notion of satisfaction. But the feasibility
radius of my message has already been reached, so I'll stop here...
best wishes
MM