[FOM] Proof Identity
Alessio Guglielmi
Lists at Alessio.Guglielmi.name
Tue Dec 6 15:26:33 EST 2005
Hello,
in <http://www.cs.nyu.edu/pipermail/fom/2005-November/009377.html>
Milos Adzic asks for opinions about the current state of the art in
the proof identity problem, and more specifically about the
normalisation and generalisation conjectures of Prawitz and Lambek.
Here is my modest opinion.
In slogans:
- Proof complexity should be taken into consideration for proof identity.
- Different notions of normalisation should be taken into account as well.
- A bigger choice of inference rules will be beneficial.
- Making progress on proof identity along these three lines is
difficult if we remain in the realm of Gentzen formalisms (and their
categorical semantics).
In a bit more detail:
I don't know Lambek's generalisation well enough to say anything,
although I think it is very interesting. About Prawitz's
normalisation: for sure, a lot of interesting research has come out
of it. However, we all know that it doesn't work well for classical
logic. Proof theorists instinctively associate cut-free proofs *in
some way* with proof identity. In my opinion, this tends to border on
therapeutic fury in many cases.
Intuitively, cut elimination is generally regarded as the process
of replacing lemma statements with their proofs: if we have a cut
  A -> B    B -> C
 ------------------ ,
       A -> C
involving the lemma B, we want to produce a cut-free proof of A -> C
by composing the proofs of A -> B and B -> C, which morally consists
in replacing an invocation of the statement B (and possibly more than
one) with its proof. At first sight, it looks like this normalisation
shouldn't affect proof identity.
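Under a proofs-as-terms reading, this cut is nothing but composition.
A minimal sketch, in Lean (the name `cut' and the purely
propositional reading are mine, for illustration only):
  -- A cut composes a proof f of A -> B with a proof g of B -> C
  -- into a proof of A -> C; the lemma B only appears internally.
  def cut {A B C : Prop} (f : A → B) (g : B → C) : A → C :=
    fun a => g (f a)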
However, cut elimination does not preserve proof complexity (modulo
polynomials), since it introduces at least an exponential blow-up.
This is a strong reason for anybody to reject such a notion of proof
identity, I believe.
There are other, very natural reasons, especially in classical logic,
to abandon cut-free-based proof identity. For example, this also is a
cut:
  A -> B    -A -> B
 ------------------- ,
          B
but we can regard it in a different way from before: it proves B by a
*case analysis*. We generate two mutually exclusive hypotheses, A and
-A, and we use them to prove B under more favourable conditions. Are
we sure that we want to equate a proof by cases with its normal form,
a proof where the case analysis is completely absent?
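The same shape as a little Lean term (again only a sketch, with names
of my choosing): the case analysis appears as an appeal to the
excluded middle on A.
  -- Proving B by cases: combine a proof of A -> B and a proof of
  -- -A -> B via excluded middle on A.
  theorem case_cut {A B : Prop} (f : A → B) (g : ¬A → B) : B :=
    (Classical.em A).elim f g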
An example is provided by the proofs of Ex.Ay(p(x) -> p(y)). One can
argue that a case-analysis proof of this formula should not be
considered the same as the cut-free one (which in this case is even
shorter, by the way). See the (sketchy) note
<http://www.computational-logic.org/~guglielm/p/AG6.pdf> for some
more details.
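For what it is worth, here is how such a case-analysis proof can be
rendered in Lean (a sketch only; the theorem name, the assumption
that the domain is inhabited and the appeal to classical reasoning
are my own packaging, not taken from the note above):
  -- Either P holds everywhere, and then any element is a witness;
  -- or it does not, and then assuming there is no witness is
  -- contradictory: any y at which P failed would itself be a
  -- witness, vacuously.
  theorem ex_all {α : Type} [Inhabited α] (P : α → Prop) :
      ∃ x, ∀ y, P x → P y :=
    Classical.byCases
      (fun (h : ∀ y, P y) => ⟨default, fun y _ => h y⟩)
      (fun (h : ¬ ∀ y, P y) =>
        Classical.byContradiction fun hne =>
          h fun y =>
            Classical.byContradiction fun hPy =>
              hne ⟨y, fun _ hy => absurd hy hPy⟩)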
In practice, people tend to investigate concepts for which there are
tools. Since cut-normalisation is one of the very few tools available
in proof theory,
people are led to use it, but it might be time to change and to
create some new tools.
This brings me to my main point: there is a need for proof-theoretic
formalisms in which more, and different, normalisation notions are
possible, some of which should be especially geared towards proof
identity.
Another simple example: wouldn't it be nice to have a normal form
where all contractions are at the bottom of a proof (say)? For *very*
stupid reasons, this is impossible in the sequent calculus (see
<http://www.iam.unibe.ch/~kai/Papers/RestContr.pdf>). Of course, a
normal form where all contractions are at the bottom would be very
handy for many notions of proof identity...
This and other normal forms are very natural in deep inference (see
<http://www.iam.unibe.ch/~kai/Papers/phd.pdf>). The deep inference
community is very active on proof identity (see
<http://alessio.guglielmi.name/res/cos>), especially because deep
inference gets rid of all the bureaucracy associated with rule
permutations. This, however, is too long a story to tell here.
Last opinion (this email is getting too long): something which is
rather neglected, or perhaps considered implicitly established, is
the choice of inference rules. For many, inference rules are more or
less those given to us by Gentzen. In reality, they are what they are
simply because it looked convenient, and it worked well for other
purposes, to base the construction of proofs on the syntactic
structure of theorems, but *it doesn't need to be that way for proof
identity*.
The problem of proof identity depends on the notion of proof, which
in turn depends on the notion of inference rule. A question that
should be asked, then, is: how can I design inference rules such that
I capture a given notion of proof identity? In deep inference,
there's a huge choice of inference rules, and established techniques
for dealing with normalisation of a much greater variety of rules
than in any other formalism. A bigger choice of rules should then
mean a bigger choice of identity notions, and so a better chance of
getting them `right'.
Cheers,
-Alessio