# [FOM] "Proof" of the consistency of PA published by Oxford UP

Richard Heck richard_heck at brown.edu
Thu Mar 5 15:35:20 EST 2015

```
On 03/05/2015 07:56 AM, Michael Kremer wrote:
> The following is related to Richard's points...

Hi, Michael! Hope things are well with you.

> On a quick skim I find the final section of the paper confused. He
> wants to ward off the claim that his argument requires an infinite
> set. He does this by claiming that his argument does not require an
> infinite model; and that he does not have an unrestricted
> comprehension scheme which would allow one to construct the set of all
> models. But his argument requires that there is no upper bound on the
> size of his models. Whether this requires an infinite set or not, it
> surely requires in some sense that there are infinitely many blocks.
> (There is no finite bound on the number of blocks.)

Yes, that is clearly correct.

> Moreover, the argument makes use of an operation of concatenation
> which yields larger arrays of blocks out of smaller arrays of blocks.
> And this operation needs to satisfy some set of axioms with sufficient
> similarities to the axioms of arithmetic to make one suspicious that
> the theory of concatenation is at least as strong as arithmetic.

Your use of the term "concatenation" does rather remind me of Quine's
paper "Concatenation as a Basis for Arithmetic". One could of course
also prove the consistency of PA by modelling arithmetic in strings---as
Hilbert more or less wanted to do.

> If there are only finitely many blocks in the universe we would be in
> a position in which the concatenation operation could not always be
> applied. (If there are 1000 blocks in the universe, we could make an
> array of 600 blocks, and an array of 601 blocks, but we could not
> concatenate them, as they would always overlap.)

The argument for induction uses what McCall calls "finite descent",
i.e., a version of the least number principle. So it seems clear enough
that the principles used to reason about blocks are at least as strong
as those of PA. Indeed, "finite descent" here is being used in
connection with the predicate:
    ~F is true of a linear array of N blocks
I.e., he is reasoning inductively about models. So it appears that the
theory in which he is working is roughly equivalent in strength to
something like PA + a compositional truth theory with extended
induction. And of course that theory is well known to prove Con(PA).

One thing I did learn from the paper concerns how McCall expresses
induction, as the rule of inference:
    From |- F(0) and |- (x)(F(x) --> F(Sx)), infer |- (x)(F(x))
Note that this is really a "rule of proof", in the way that
necessitation is a rule of proof: The premises of the inference have to
be *theorems*. This is entirely different from formulating induction as
a rule of inference:
    F(0), (x)(F(x) --> F(Sx)) |- (x)(F(x))
which is of course equivalent to the usual scheme. I was worried for a
bit that this must be weaker than induction in the conditional form, but
in fact it seems to be equivalent.

To prove induction as a conditional:
    F(0) & (x)(F(x) --> F(Sx)) --> (x)(F(x))
note that this is equivalent to:
    (z)[F(0) & (x)(F(x) --> F(Sx)) --> F(z)]
And we can now prove this by induction on z. So we have to prove:
    F(0) & (x)(F(x) --> F(Sx)) --> F(0)
which is trivial, and
    (z){[F(0) & (x)(F(x) --> F(Sx)) --> F(z)]
        --> [F(0) & (x)(F(x) --> F(Sx)) --> F(Sz)]}
Assume [F(0) & (x)(F(x) --> F(Sx)) --> F(z)] and F(0) & (x)(F(x) -->
F(Sx)). The second is the antecedent of the first, so F(z); but then
F(z) --> F(Sz), so F(Sz). QED.

McCall's proof of induction depends, as he notes (end of section 5),
upon its being in this particular form.

Richard

```
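Heck's derivation of the conditional form of induction from the rule-of-proof form can be checked mechanically. A sketch in Lean 4, using Nat in place of a PA-style theory (the lemma names are mine): the rule is applied to the predicate G(z) := [F(0) & (x)(F(x) --> F(Sx))] --> F(z), exactly as in the message.

```lean
-- The rule form of induction: from proofs of G 0 and the step, conclude ∀ z, G z.
def ruleInduction (G : Nat → Prop)
    (h0 : G 0) (hs : ∀ x, G x → G (x + 1)) : ∀ z, G z := by
  intro z
  induction z with
  | zero => exact h0
  | succ n ih => exact hs n ih

-- The conditional form, derived by applying the rule to
-- G z := (F 0 ∧ ∀ x, F x → F (x + 1)) → F z, as in the message.
theorem conditionalInduction (F : Nat → Prop) :
    F 0 ∧ (∀ x, F x → F (x + 1)) → ∀ z, F z := by
  intro h z
  refine ruleInduction (fun z => F 0 ∧ (∀ x, F x → F (x + 1)) → F z) ?_ ?_ z h
  · intro h'
    exact h'.1            -- base case: trivial, as Heck notes
  · intro n ih h'
    exact h'.2 n (ih h')  -- step: F n from the hypothesis, then F (n+1)
```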
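The modelling of arithmetic in strings that Heck mentions can be sketched as a toy: numerals as tally strings, addition as concatenation, multiplication as repetition. This is only an illustration of the idea (the function names here are invented), not Quine's actual construction:

```python
def numeral(n: int) -> str:
    """Represent n as a string of n tallies -- a 'linear array of blocks'."""
    return "|" * n

def add(a: str, b: str) -> str:
    """Addition is literal concatenation of arrays."""
    return a + b

def mul(a: str, b: str) -> str:
    """Multiplication repeats the first array once per tally of the second."""
    return a * len(b)

def value(s: str) -> int:
    """Read a tally string back as a number."""
    return len(s)
```

On this picture, concatenating a 600-block array with a 601-block array yields a 1201-block array -- exactly the step that is unavailable if the universe contains only 1000 blocks, as in Kremer's example.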
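McCall's "finite descent" -- the least number principle -- also has a direct computational reading: from any counterexample to F one can descend to a least counterexample, and the descent terminates because each step strictly decreases. A minimal sketch (the helper name is invented):

```python
def least_counterexample(F, n: int) -> int:
    """Given n with not F(n), descend to the least m <= n where F fails.

    Each pass moves to a strictly smaller counterexample, so the loop
    terminates -- the computational content of the least number principle.
    """
    assert not F(n), "n must be a counterexample to F"
    while any(not F(m) for m in range(n)):
        n = next(m for m in range(n) if not F(m))
    return n
```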