[FOM] constructivism and physics

Neil Tennant neilt at mercutio.cohums.ohio-state.edu
Fri Feb 10 11:09:46 EST 2006


Harvey Friedman's post of Thu, 09 Feb 2006 03:03:36 raises some
interesting f.o.m. considerations concerning the debate over the adequacy
of constructive/intuitionistic mathematics for any and all applications
within empirical science.

I think I need to clarify, however, some respects in which the main
argument of my paper is not affected by these considerations---interesting
though they might be, in their own right.

I would readily concede that the characterization of constructive
mathematics given by Bridges would be contested by many an intuitionist or
constructive mathematician. Their mathematics is not necessarily captured
as the closure, under intuitionistic logic, of a set of axioms on which
both they and the classical mathematician can agree. [Interesting
foundational question: what is the crucial difference, for the
intuitionist, between conventional intuitionistic mathematics, and what
can be recovered of it in, say, IZF(C)?]

My paper, to be sure, speaks to one of Bridges' persuasion; so, if the
reader is not of that persuasion, the main argument needs to be
qualified. But I believe it can be qualified without any loss. 

Take a set A of mathematical axioms on which both the classicist and the
intuitionist can agree. This can be done by using only the universal
quantifier, avoiding use of the existential quantifier, and sprinkling
double-negations wherever the intuitionist insists they should appear. For
the classicist, these tinkerings will be of no consequence, since the
axioms written down under these restrictions would be classically
equivalent to the original ones that the intuitionist was unwilling to
accept. (In this respect compare the axioms of "intuitionistic ZF(C)" with
those of classical ZF(C).)
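
One standard device for producing such a set A (not necessarily the
formulation used in my paper) is the Gödel-Gentzen negative translation,
which maps every formula into the fragment built from negation,
conjunction, implication and the universal quantifier. A minimal sketch,
in LaTeX notation:

	% Gödel-Gentzen negative translation (.)^N, by recursion on formulas
	P^N                      := \neg\neg P          % P atomic
	(\varphi \wedge \psi)^N  := \varphi^N \wedge \psi^N
	(\varphi \to \psi)^N     := \varphi^N \to \psi^N
	(\neg\varphi)^N          := \neg \varphi^N
	(\varphi \vee \psi)^N    := \neg(\neg\varphi^N \wedge \neg\psi^N)
	(\exists x\,\varphi)^N   := \neg \forall x\, \neg\varphi^N
	(\forall x\,\varphi)^N   := \forall x\, \varphi^N

Classically, \varphi and \varphi^N are equivalent; intuitionistically,
\varphi^N is in many cases acceptable where \varphi itself is not.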

The main aim of the paper referred to in 

http://www.cs.nyu.edu/pipermail/fom/2006-February/009734.html

was to show that intuitionistic (relevant) *reasoning* (codified in a
logic such as IR) suffices for all applications of mathematics (as
codified by the set A of axioms) that could be made in the empirical
sciences. The argument proceeds via an analysis of the overall logical
structure of scientific theory-testing. A striking feature, it turns
out, is that the strictly classical *theorems* one might have derived
from A (used, in applications, to obtain testable predictions from one's
scientific hypotheses) could well "disappear" in the proof-normalization
process that serves up an IR-proof of absurdity from [A plus those
hypotheses plus statements of boundary conditions, ... etc.]. The
observational content of the theory being tested is, as it were,
*intuitionistic-relevantly* accessible on the basis of just the *axioms*
A, and not on any *theorems* derivable only *classically* therefrom.
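
Schematically (my notation here, not a quotation from the paper): writing
A for the mathematical axioms, H for the scientific hypotheses, and B for
the boundary conditions, the claim has the form

	A,\ H,\ B \ \vdash_{C}\ \bot
	\quad\Longrightarrow\quad
	A',\ H',\ B' \ \vdash_{IR}\ \bot
	\qquad (A' \subseteq A,\ H' \subseteq H,\ B' \subseteq B)

where \vdash_{C} is classical deducibility and \vdash_{IR} is deducibility
in intuitionistic relevant logic. Strictly classical theorems interpolated
as lemmas on the left-hand side are dispensable on the right.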

One can therefore sidestep the whole debate over whether such-and-such
theorem of mathematics that finds application within some central (or
arcane!) part of theoretical physics has no (or no known) intuitionistic
proof. The whole point is that the *classical* provability of such results
is not in principle necessary in order for the *axioms* to deliver the
empirical content of the scientific theory. That empirical content will
always be *constructively extractable*, using only intuitionistic relevant
logic.

The point at which the current debate between John Burgess and myself
becomes pertinent arises when one tries to reflect, philosophically and/or
methodologically, on whether the points made above provide any
justification for the view that one should use only intuitionistic
relevant reasoning, and abjure all classical and/or irrelevant reasoning.
In other words, should one be an IR-reformist? (This is what Burgess calls
a "perfectionist", one of the nicer labels that could be applied to the
position in question.)

Burgess appeals to the alleged blow-up, upon normalization, of the very
proof of the result on which I base my in-principle considerations. That
result says that the prima-facie classical reductio of [Axioms +
Scientific Hypotheses + Boundary Conditions + ...] can be normalized and
then relevantized so as to become a reductio within intuitionistic
relevant logic. It is the normalization (or cut-elimination) that brings
with it the danger of exponential blow-up in proof-length.
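
For calibration, a general fact about cut elimination (due to Statman and
Orevkov), not a claim about this particular meta-proof: the worst-case
growth is not merely exponential but non-elementary.

	% There are valid sequents with proofs-with-Cut of size O(n) whose
	% shortest cut-free proofs have size at least 2_n, where
	2_{0} := 1, \qquad 2_{k+1} := 2^{2_{k}}
	% i.e. a tower of 2s whose height grows with n.

Whether the particular normalization needed for my metatheorem exhibits
this worst-case behaviour is exactly what is at issue in (1) below.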

There are two issues here:

1. Does this blow-up actually occur in the proof of my main metatheorem?
(I am more sanguine than Burgess: I expect that the meta-proof of the
result in question, *using IR* as one's metalogic, will turn out to be
surveyable.)

2. Even if my own sanguine conjecture regarding (1) proves to be
incorrect, would this really be reflectively disabling or undermining?
What is the dialectical situation here? Am I really to be prohibited from
using Cut in order to provide a proof, *to the person who accepts Cut
and refuses to give it up*, of my result to the effect that Cut is not,
*in principle*, necessary? Why should I be forced to find a surveyable
*cut-free* proof of a result which seeks only to state what is, in
principle, possible by means of unsurveyable (because cut-free) proofs?

At the very least, whatever one's views on (1) and (2), the
"perfectionist" IR-advocate is in a position to explain to his
non-reforming, quietist, classicist colleague why it is that we resort to
the use of Cut all the time. We do so because it puts results within our
reach that would otherwise lie beyond it. And the use of Cut brings with
it no danger at all of "going wrong": of lapsing into inconsistency, or
of committing a fallacy that takes one from a *consistent* set of
premises to a conclusion that does not follow logically from that set.
(The use of Cut can, of course, precipitate fallacies of *irrelevance*,
but these will involve only *inconsistent* sets of premises. Since we are
all assuming that our mathematical axioms are consistent, this worry can
be set aside, at least as far as reasoning within mathematics is
concerned.) 
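
For reference, the rule at issue, in its usual sequent-calculus form; the
cut formula \varphi is the interpolated "halfway house":

	\frac{\Gamma \vdash \varphi \qquad \Delta,\ \varphi \vdash \psi}
	     {\Gamma,\ \Delta \vdash \psi}
	\ (\mathrm{Cut})

Chaining previously proved lemmas in ordinary mathematical practice is
just iterated use of Cut.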

Likewise, the use of Classical Reductio (or any of its equivalents, such
as the Law of Excluded Middle, or Double Negation Elimination) serves the
purpose of finding *shorter* and surveyable proofs of mathematical
theorems that can "find application" in empirical science. 

Likewise, our tendency to *reify the abstract*, by quantifying over
mathematical objects, can be explained (as Hartry Field has made clear) as
serving the interests of shorter deductions of "synthetic" results within
the empirical sciences. (I shall not pause to try to finesse issues such
as whether spacetime points or regions are nominalistically acceptable,
or whether one is working at first or second order.)
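
Field's conservativeness claim, in schematic form (my formulation,
glossing over the caveats just mentioned): where S is a set of
nominalistically stated ("synthetic") premises, M is the mathematical
theory, and \varphi is itself nominalistically stated,

	S \cup M \vdash \varphi
	\quad\Longrightarrow\quad
	S \vdash \varphi

Mathematics, on this view, buys shorter derivations, not new synthetic
consequences.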

All three tendencies:

	Platonizing 		(treating numbers as objects)
	Classicizing		(using Excluded Middle)
	De-relevantizing	(using Cut, by interpolating deductive
				 "halfway houses")

receive a uniform explanation as *pragmatically useful*, without any *risk
of cognitive defect*. Platonizing mathematics conservatively extends
synthetic scientific theorizing (Field); Classicizing produces only
classical theories that are consistent, if their intuitionistic
counterparts are consistent (Gödel-Glivenko-Gentzen); and
De-relevantizing produces only scientific predictions/tests that can, in
principle, be replicated even when using only intuitionistic-relevant
reasoning (my result).
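
The middle claim is the familiar equiconsistency fact, which in its usual
form runs:

	% Glivenko (propositional logic):
	\vdash_{C} \neg\varphi \quad\iff\quad \vdash_{I} \neg\varphi
	% Gödel-Gentzen (first-order, via the negative translation above):
	T \vdash_{C} \bot \quad\iff\quad T^{N} \vdash_{I} \bot

So a classical theory is consistent just in case its negative translation
is intuitionistically consistent; for PA and HA, for example, the
translation lands inside HA, whence PA is consistent iff HA is.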

Neil Tennant


