[FOM] Convincing Edward Nelson that PA is consistent

Timothy Y. Chow tchow at math.princeton.edu
Mon Jun 18 23:56:39 EDT 2018


Sadly, of course, Nelson passed away a few years ago so there is no way we 
can literally convince him that PA is consistent.  However, in the process 
of writing an expository article about the consistency of PA, I thought of 
a way that "normal" people could potentially make some progress on the 
project that Nelson was engaged in---though of course in the direction of 
showing that PA is consistent rather than inconsistent.

Since Nelson's views were unusual, I have to give some philosophical 
preamble.  I would like to introduce the term "strict formalist."  A 
strict formalist is able to *follow* prescribed syntactic rules and can 
for example verify that a given mathematical proof (of feasible size) 
follows from given axioms, but takes very seriously the claim that the 
symbols and strings do not *mean* anything.  In particular, the strings 
cannot refer to syntactic objects, and an expression such as "Con(PA)" 
cannot mean anything, let alone mean something about what happens if you 
manipulate the axioms of PA according to the rules of first-order logic. 
This is not just because "Con(PA)" is usually formulated as an 
arithmetical statement; even if we use something like Quine's protosyntax 
that is supposed to directly formalize syntactic reasoning, the strict 
formalist refuses to *interpret* protosyntax as "saying" anything about 
syntax.

It's a pretty good approximation to think of a strict formalist as a 
computerized proof assistant.  To drive home the point that strict 
formalists only *follow* rules and don't *reason* about them, imagine 
starting with the symbol "0" and then repeatedly applying the rule 
"prepend an S."  The results, of course, will look like this: S0, SS0, 
SSS0, SSSS0, ....  You and I look at this and can reason that at no point 
will we obtain a string that contains (say) a "+" sign.  However,
this observation is a piece of *mathematical reasoning* and is unavailable 
to the strict formalist.  The strict formalist can verify on a 
case-by-case basis that "S0 has no + in it," "SS0 has no + in it," etc., 
but that is all.
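
To make the contrast concrete, here is a minimal sketch in Python (my own 
illustration; the cutoff of ten cases is arbitrary) of the kind of 
rule-following and case-by-case checking that a strict formalist performs:

    # Apply the rule "prepend an S" and check each resulting string, one
    # case at a time.  Nothing in this program states the general claim
    # that no "+" will ever appear; that claim lives only in our
    # mathematical reasoning *about* the program.
    s = "0"
    for _ in range(10):        # an arbitrary, feasible number of cases
        s = "S" + s            # the prescribed rule: prepend an S
        assert "+" not in s    # case-by-case check: "S0 has no + in it", etc.
        print(s, "has no + in it")

The loop verifies each string it actually produces and nothing more; the 
universal statement "no string in this sequence ever contains a +" appears 
nowhere in it.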

I introduce the concept of a strict formalist as a "baseline."  The strict 
formalist is able to accomplish the main tasks that formalists are eager 
to affirm we can perform, namely recognizing syntactic objects, 
manipulating them according to certain rules, and verifying that the 
rules have been correctly applied, but is otherwise an absolute 
mathematical dullard.  Implicitly, a strict formalist could verify that 
PA is inconsistent by checking the correctness of a PA proof of 0=S0, but 
would not be able to perk up and say, "Hey, look at that---PA is 
inconsistent!"  It is therefore impossible to present an argument to a 
strict formalist that will get the strict formalist to affirm (in 
English) that "this system is *inconsistent*," let alone the more subtle 
claim that "this system is *consistent*," for the trivial reason that 
strict formalists are not in the business of affirming mathematical 
statements in English.

In practice, I don't think any human being is a strict formalist.  In 
Nelson's paper "Taking Formalism Seriously," he doesn't take formalism as 
seriously as a strict formalist does.  At the end, he talks somewhat 
vaguely about proving that some weak system is consistent in a way that 
will "convince the most skeptical of formalists."  Of course, a strict 
formalist is even more intransigent than what Nelson envisages as "the 
most skeptical of formalists," but the point is that the concept of a 
strict formalist lets us see that Nelson is slipping in some extra 
assumptions.  Namely, Nelson is tacitly allowing the possibility that 
*some kinds of mathematical arguments can provide us with secure knowledge 
about syntactic objects*.  The symbols used to record these mathematical 
arguments are not just meaningless symbols, to be manipulated according to 
certain rules; they *tell us something* about the behavior of syntactic 
entities.

To my knowledge, Nelson was never explicit about exactly what mathematical 
principles "the most skeptical of formalists" would accept.  I would 
presume that Nelson would allow us to conclude "PA is inconsistent" if we 
were presented with an explicit proof of a contradiction from PA.  I would 
also presume that Nelson would allow us to conclude that if all we do is 
repeatedly prepend "S" to "0" then we will never get a "+"---although even 
here, I think we have to be careful about what "never" means, because Nelson is
what I would call an ultrafinitist.  The sequence S0, SS0, SSS0, ... 
gradually fades off into meaninglessness because the strings get 
infeasibly large.  Characterizing this "fading off" in a way that is 
mathematically analyzable is notoriously difficult and probably can't be 
done in a fully satisfactory manner; however, one can try to work around 
it by setting some upper and lower bounds on the fading-off point (maybe 
2^1000 and 2^100, respectively, using conventional mathematical notation).

Now here is an important step: I maintain that the most plausible way to 
interpret "PA is consistent" in this context is "There is no *feasible* 
PA proof of 0=S0."  Again, exactly what this means is tricky, but the 
conventional mathematical claim that "There is no PA proof of 0=S0 shorter 
than 2^1000 symbols" should imply it.

We're now in a position to pose a question that makes sense to a 
conventional mathematician: What are the weakest mathematical principles 
that allow us to deduce "There is no PA proof of 0=S0 shorter than 2^1000 
symbols"?

What makes this particularly tricky is that "the most skeptical of 
formalists" isn't going to be happy interpreting "2^1000" the way a 
conventional mathematician does.  So we have to be careful when we deal 
with exponential notation, and perhaps even multiplicative notation.  At 
the same time, if we are to have any hope of making progress, we have to 
allow *some* principles that allow us to prove theorems of length n that 
"say something about" syntactic entities of length N when N >> n. 
Otherwise we can't approach "PA is consistent" other than by direct 
exhaustion.  I'm not sure exactly how to proceed at this point, but if 
this issue can be addressed, then I can imagine that in principle, "the 
most skeptical of formalists" could become convinced that "PA is 
consistent" by carrying out something like the following:

1. Do a computation with 1000 steps to arrive at a theorem that says 
that there is no PA proof of 0=S0 of length at most 1000000.

2. Do a computation with 1000000 steps to arrive at a theorem that says 
that there is no PA proof of 0=S0 of length at most 1000000000000.

3. Etc.

Here I've arbitrarily picked an "expansion factor" of n -> n^2.  That 
means that by carrying out computations whose length is only the square 
root of the feasibility bound, the "most skeptical of formalists" can 
become convinced that there is no feasible contradiction.  I can imagine 
developing some kind of sliding scale, where progressively stronger 
mathematical principles give you bigger expansion factors.  Very 
skeptical formalists might have to go up to the square root of the 
feasibility bound to be convinced that PA is consistent, while less 
skeptical formalists might only have to go up to something 
polylogarithmic in that bound.
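
For concreteness, here is a small Python sketch (my own illustration: it 
uses the n -> n^2 expansion factor and the starting length of 1000 from 
the steps above, and it takes 2^1000 as the feasibility bound discussed 
earlier) that just tabulates how far the bootstrapping has to go:

    # Tabulate the bootstrapping procedure: a computation of length n is
    # assumed to yield a theorem ruling out PA proofs of 0=S0 of length at
    # most n^2, and that bound becomes the length of the next computation.
    feasibility_bound = 2 ** 1000   # upper bound on feasibility from earlier
    n = 1000                        # length of the computation at step 1
    step = 1
    while True:
        proved_bound = n ** 2       # expansion factor n -> n^2
        print(f"step {step}: a computation of length 10^{len(str(n)) - 1} "
              f"rules out proofs of length up to 10^{len(str(proved_bound)) - 1}")
        if proved_bound >= feasibility_bound:
            break                   # the proved bound now exceeds 2^1000
        n = proved_bound            # the next computation has this length
        step += 1

With these numbers the proved bound passes 2^1000 (roughly 10^301) at the 
seventh stage, and none of the computations involved is longer than the 
feasibility bound itself.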

Tim

