[FOM] NEGATION/CUT Inspired Research

Harvey Friedman hmflogic at gmail.com
Sun Sep 13 19:37:56 EDT 2015


I want to review some promising research topics that I have mentioned
a bit previously.

MATHEMATICS WITHOUT NEGATION.

There is an obvious sense in which the systems without negation are
the "same" as the systems with negation. (I was reminded of this well
known fact by Arnon Avron).

For, you simply interpret negation as follows. notA is interpreted as

*A --> U, where U is the universal quantification of the conjunction
of all atomic sentences*.

Of course, this works only for a finite language, which is not really
a significant constraint here.

Then all of the provables are translated into provables without negation.

Or, if you use # for the contradiction, instead of, or in addition
to, negation, then # is translated into U.

And again, you get provables in the usual systems going to provables
in the negation free system.
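
To make the translation concrete, here is a minimal sketch in Python,
using a toy formula representation entirely of my own choosing (the
class names and the opaque marker for U are illustrative only, not
part of any standard package):

from dataclasses import dataclass

@dataclass
class Atom:
    name: str

@dataclass
class Not:
    arg: object

@dataclass
class Implies:
    left: object
    right: object

@dataclass
class Falsum:
    pass  # the contradiction symbol #

# U stands for the universal quantification of the conjunction of all
# atomic formulas of the finite language; here it is an opaque marker.
U = Atom("U")

def translate(phi):
    """Replace notA by (A --> U) and # by U, recursively."""
    if isinstance(phi, Atom):
        return phi
    if isinstance(phi, Falsum):
        return U
    if isinstance(phi, Not):
        return Implies(translate(phi.arg), U)
    if isinstance(phi, Implies):
        return Implies(translate(phi.left), translate(phi.right))
    raise ValueError("unhandled connective")

# notP goes to P --> U:
print(translate(Not(Atom("P"))))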

An interesting point is this. The usual logical systems such as
Gentzen sequent calculus work perfectly fine without any # or negation
here. So there are no axioms or rules involving # or negation at all,
and in both the classical and intuitionistic environments, one has
exactly the same provables.

Of course, this relies on the obvious fact that in the usual systems
without negation, one can easily go from U to any formula whatsoever:
instantiate U to get every atomic formula, and a trivial induction on
formulas does the rest. No rule like "from U derive anything" is
needed.

So removing negation from logic is an interesting and obvious way of
sidestepping the issues some people argue about concerning the rules
for negation.

Now I didn't bring this topic up just to review this well known stuff.

Take PA. I commented on what happens when you take only those axioms
of PA which have no negation. Under the usual axiomatization of PA,
this means

Sx = Sy implies x = y.
x+0 = x.
x+Sy = S(x+y).
x dot 0 = 0.
x dot Sy = (x dot y) + x.
induction scheme for formulas with no negation.

Note that I don't have any version of Sx not= 0 here.

This is an interesting system, and it obviously "corresponds" to the
usual PA with Sx not= 0 dropped.

Also, if I add Sx = 0 --> S0 = 0 to the system above, then I get a
system that "corresponds" to the usual PA.

We have to be careful about what "corresponds" means here. I don't quite
mean interpretation in the usual sense, since these negation free
systems have single point models.
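
As a quick sanity check on the single point model remark, here is a
trivial Python verification (my own illustration) that the one element
structure, in which S, + and dot all return the single element 0,
satisfies the quantifier free axioms listed above; induction is also
trivial on a one element domain, and with no version of Sx not= 0,
nothing rules this structure out.

D = [0]  # one element domain

def S(x): return 0
def add(x, y): return 0
def mul(x, y): return 0

ok = True
for x in D:
    for y in D:
        ok &= (not (S(x) == S(y))) or (x == y)    # Sx = Sy implies x = y
        ok &= add(x, 0) == x                      # x + 0 = x
        ok &= add(x, S(y)) == S(add(x, y))        # x + Sy = S(x+y)
        ok &= mul(x, 0) == 0                      # x dot 0 = 0
        ok &= mul(x, S(y)) == add(mul(x, y), x)   # x dot Sy = (x dot y) + x

print(ok)  # True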

Well, here is the phenomenon that is quite interesting, and that I
want to think about further. Consider the statement

"T does not prove everything".

For the negation free system with Sx = 0 --> S0 = 0, "T does not prove
everything" is equivalent to Con(PA).

But for the negation free system as written above, "T does not prove
everything" is much weaker, and provable in EFA. This is also the case
for the usual system if we drop Sx not= 0.

So I will concede that so far, what I am saying about negation free PA
can already be said equally well about normal PA if we drop Sx not= 0.
But this may not be the case upon further investigation.

Also, we definitely want to look at such issues to the extent
possible, well beyond PA or HA.

See my joint paper with

R. K. Meyer, Whither Relevant Arithmetic?, Journal of Symbolic Logic,
Vol. 57, No. 3, September 1992, pp. 824-831.

I plan to take the discussion to my future numbered postings.

SELF PROVING

Mathematicians are quick to recognize the importance of breaking down
their proofs into bite sized pieces, many of which are "self proving". That
is, if you simply unravel what is being claimed, and properly cite
what has already been proved there and elsewhere, as well as the
relevant definitions, then you get something which kind of proves
itself. No creativity involved, just unpacking what is going on.

This is different from what a Theorem Prover might be able to find. A
Theorem Prover will find both more and less than this. It will definitely
find a lot that is not such a simple unraveling, especially in realms
where it is using some algorithmic packages. It will also, most
frustratingly, find a lot less. But ideally, if you really do make the
right citations, and you really are operating in a good clean
environment that you really understand well, the Prover should be able
to instantly find all of these self proving things. That's where we
want to be.

To support all this, it would be nice to have some good clean theory
and maybe some interesting results concerning Self Proving. And a lot
of nice examples where things are nicely self proving, with proper
citations.
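
To illustrate the kind of bite sized, self proving step I have in
mind, here is a toy example in Lean 4 (my own illustration, assuming a
recent toolchain where the omega tactic is available; any reasonable
proof assistant would do). Once the definition being cited is
unfolded, what remains is routine and the system finds it instantly,
with no creativity involved.

-- Toy definition, invented purely for illustration.
def double (n : Nat) : Nat := n + n

-- "Self proving": unfold the cited definition, and the rest is linear
-- arithmetic that closes mechanically.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega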

HIGHLY NOT SELF PROVING

The most common highly not self proving situation is where you are
proving something, and in order to prove it you MUST bring in external
concepts. Unless you at least cite these external concepts, you do not
expect the Prover to have a clue as to what to do.

Now here I am not talking about pure logic, where there are results
that indicate a sense in which you NEVER have to go externally. I will
come back to that under the next header.

NOW HERE is where we want some delicious serious new theory.

I am looking for real world examples of really interesting theorems
where we can show that all proofs must involve notions quite foreign
to the thing being proved.

A particularly rich environment for such a study, although there
are many other really good environments, is that of PA adapted to the
rationals with a good chunk of primitives.

INT(x) for x is an integer.
<, =, >, <=, >=, not=.
0, 1, 2.
+, dot (multiplication), division.
exp for rational base, integer exponent.
factorial for nonnegative integers.
usual quantifier free axioms for all of the above.
everything is the division of two integers.
full induction for nonnegative integers.

This should be enough to formalize quite a few interesting examples,
where you can certainly prove the theorem in this system, but where
the formulas involved must be way beyond what is being proved.
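
To give a flavor of what I have in mind for the quantifier free axioms
(my own rendering; the exact list is not pinned down above), one could
take, for nonnegative integer exponents and arguments, things like

x^0 = 1.
INT(n) and n >= 0 --> x^(n+1) = (x^n) dot x.
0! = 1.
INT(n) and n >= 0 --> (n+1)! = (n!) dot (n+1).

and read "everything is the division of two integers" as: for all x
there are integers m, n with n not= 0 and x = m/n.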

CUT ELIMINATION

The usual cut elimination theorem tells us that in first order logic,
if a formula is valid, then you can prove it without going beyond the
notions involved in what you are proving (this is the subformula
property of cut free proofs).

But there is a super exponential blowup. So with any kind of remotely
practical considerations, it is simply not the case that if a formula
is proved in predicate logic, you can be assured that YOU can prove it
in predicate logic with only the notions at hand. YOU may not have the
patience to construct proofs of size 2^2^2^2^1000000, for instance.
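
To fix the scale (my gloss on the numbers): write 2_0(x) = x and
2_(k+1)(x) = 2^(2_k(x)), so that 2^2^2^2^1000000 is 2_4(1000000). The
classical lower bounds for cut elimination, in the spirit of Statman
and Orevkov, produce valid sequents with small proofs using cut whose
cut free proofs must have size roughly a tower of 2's of height
growing with the size of the sequent.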

We know some classical examples that are quite natural where this
phenomenon kicks in. HOWEVER, we don't have any or many real world
examples. AND there is a huge supply of candidates lying around, from
looking at the results of interacting with Theorem Provers, where one
has properly cited what one needs to turn something interesting into a
first order logic proof.

Now I should mention a caution here. If you have proved something in
predicate calculus, then there will be a proof of reasonable size JUST
OUTSIDE of predicate calculus, adhering only to the concepts in the
statement. Of course, you are then availing yourself of a fixed set of
concepts that is part of this JUST OUTSIDE predicate calculus system I
am referring to.

I now ask for a more radical REAL WORLD problem. Find a real world
first order validity, say coming out of the Theorem Prover data, where
any proof, even in systems going rather significantly beyond first
order logic, is ridiculously large.

Sometimes there is controversy about just what cut avoidance means. I
regard this as mostly rhetorical. The acid test is: what notions are
being invoked in the proof? If the notions are foreign to those in the
statement being proved, then you have the essential feature of cut NOT
being eliminated - even if you say it has been, because the burden has
been taken up by some other rules.

Harvey Friedman
