FOM: Standards of mathematical rigour and logical consequence
Vladimir Sazonov
sazonov at logic.botik.ru
Mon Oct 26 13:38:26 EST 1998
Charles Silver wrote:
>
> On Sun, 25 Oct 1998, Vladimir Sazonov wrote:
> > It seems that at present practically everybody agrees with the
> > framework based on set theory, so that no problems with rigour
> > arise in everyday mathematical practice. Those who know the formal
> > rules of the predicate calculus may have somewhat different, more
> > detailed standards. But this is a paradise only for those who do
> > not try to look outside this framework, or who are indifferent to
> > the length of (imaginary?) proofs and to the uncontrolled use of
> > implicit abbreviating mechanisms in proofs.
>
> I find your finitistic viewpoint very interesting, but there are a
> couple of things I don't understand.
> You seem to accept first-order
> logic,
First, I accept the *language* of FOL just because we need some
expressive (in a naive sense) language.
> which I think indicates that you accept the relation of logical
> consequence in first-order logic. Normally, this relation is defined
> set-theoretically, and I believe--please correct me if I am wrong--that to
> define logical consequence requires the existence of an infinite set.
Before set theory, and then model theory and the Goedel completeness
theorem, arose (say, in the time of Euclid) there was no
*set-theoretically defined* logical consequence relation. There were
essentially syntactical (in a broad sense of this word) rules of
"correct" reasoning. Newcomers learned these rules by training, i.e.
as given, and of course by some appeal to geometrical and other
intuition. These rules arose (due also to certain people,
professional mathematicians) in accordance with, and SIMULTANEOUSLY
WITH, the creation of this intuition. Each newcomer simply repeats
this creation process in a shorter way with the help of a teacher.
But he, of course, learns, and sometimes even grinds, more than he
creates himself.
What is said above is essentially valid even today for most
mathematicians, who know nothing (or have forgotten, or ignore what
they learned) about mathematical logic and the Goedel completeness
theorem. Nobody except logicians needs or uses a set-theoretically
defined logical consequence relation in everyday mathematical
practice. But each mathematician knows sufficiently well what a
"correct" proof is; he knows some *rules* of mathematical reasoning.
That is why the formal side of mathematics (of course together with
the corresponding intuition and informal semantics!) is the most
crucial feature, the one determining the very essence of mathematical
rigour. Logicians know these rules in more detail, more explicitly.
We could say that they have somewhat better standards of mathematical
rigour. When it is necessary, mathematicians also use such a high
standard quite explicitly and consciously (say, in manipulating
algebraic formulas or integrals, and in proofs which are more like
calculations). If some new mathematical subject proves to be more
sensitive to (the degree of) mathematical rigour, they will use an
appropriate *completely formal* calculus. Also, mathematicians will
always try to find a way to present even completely formal proofs
informally, since this is more understandable.
What is secondary for mathematics is that, with the help of set
theory, a formal or mathematical semantics of logical calculi happens
to be definable, and that some theorems relating syntax and semantics
are provable which have some *correlation* with the naive intuition
of mathematicians dealing essentially (maybe unconsciously) with the
same formalisms. These correlations are very interesting and
important, but they do not have any absolute value. We should never
mix two kinds of semantics:
(i) the primary, informal (actually vague) semantics, and
(ii) the secondary, formal mathematical semantics (say, the semantics
of A. Tarski for FOL or the denotational semantics of D. Scott for
the type-free lambda calculus).
I see such a mixing when, in a general context (i.e. not in the
framework of ZFC or the like), somebody speaks of a "standard" or
"unique, the same for all of us" model of PA. Who can know what I
mean when thinking informally about natural numbers, so as to say
that they are the same as his own? And what does it mean that they
are the same for all of us? We all have only some correlations in
our understanding. The only objective, or most solid, thing is a
formalism which we have all agreed to consider and, say, some
practical experiments confirming, in such and such a concrete sense,
some essential (not necessarily all) postulates or theorems of the
formalism.
Thus, in principle it is possible that mathematicians (or logicians,
who are also mathematicians) will formalize some new intuition by
some new formalism based on some new logic (maybe close to FOL) which
will not have the ordinary semantics in set-theoretic terms at all.
Maybe a mathematical semantics of this formalism will be found in
quite unusual terms (as was the somewhat unexpected "continuous"
semantics of D. Scott mentioned above), or it will not be found soon
(again, as for the lambda calculus, which existed for rather a long
time without a mathematical semantics, except probably an operational
one in terms of conversion/reduction). Anyway, we have a formalized
intuition, and this is the *necessary and sufficient condition* for
doing mathematics in principle. Any mathematical semantics, in any
terms, serves only as something additional, illuminating, clarifying,
helping our intuition, something extremely important, but not
replacing the informal intuition.
An example of such a formalization, FEAS, of the intuition of
feasible numbers was very roughly outlined in my posting to fom on
22 Oct 1998. It was crucial there that any proof in the formalism
should have feasible size (say, be writable on a number of physically
existing sheets of paper, or on the hard disk of a really existing
computer). Also, the proof should be completely formal. In
particular, any abbreviations may be used only if they are already a
part of the formalism considered. More precisely, it is only our
informal view that some kinds of formal rules serve as abbreviations.
Formally, they are rules just like modus ponens, reductio ad
absurdum, etc. But if we postulate *some* abbreviation rules in our
formalism and say nothing about others, then these other
abbreviations are simply not allowed at all. (Compare: if the Axiom
of Choice is not postulated, then it is not allowed to use it even
implicitly. In FEAS it is allowed in principle to abbreviate
formulas and even (sub)proofs, but not terms. What exactly this
could mean is a separate question and may be formalized in different
ways.) As a result, it is possible that a given formalism may be
feasibly consistent, but becomes inconsistent if some forbidden
(i.e. not explicitly postulated) abbreviations are used.
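Just to illustrate why the treatment of *term* abbreviations is so
sensitive (this is only a rough sketch, not the precise formulation
of FEAS from my 22 Oct posting): without abbreviations, the only
closed terms denoting numbers are successor numerals SS...S0, so a
term denoting 2^1000 already has infeasible length and no feasible
proof can even write it down. But if abbreviations of terms were
admitted, one could introduce

    t_0     := SS0              (a numeral for 2)
    t_{i+1} := t_i \cdot t_i    (so t_i denotes 2^{2^i})

and a chain of only about 1000 such short definitions would feasibly
name the astronomically large number 2^{2^{1000}}. It is exactly
this kind of compression which can turn a feasibly consistent
formalism into an inconsistent one, if abbreviations that were not
explicitly postulated are smuggled in.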
Note again that (for formalizing a reasonable logical consequence
relation) it is unnecessary to have in advance any Tarski-model,
Kripke-model, Kleene-realizability-model, set, semiset, fuzzy-set, or
I do not know which theory. We need only some initial, possibly very
vague intuition about the intended universe of our formal reasoning
(such as feasible numbers), and a serious desire to find a reasonable
formalism (here, FEAS) which corresponds to this intuition and which
will make it a *mathematical* one. The problem of finding a
corresponding model theory is, of course, very interesting. But at
present, for the case of FEAS and the like, it is difficult to say
anything definite about it. Moreover, as I wrote on 22 Oct 1998,
both A and \neg A are provable in FEAS for some concrete formula A
(without a formal contradiction; not every sentence is provable in
FEAS!). This means that we have a formalism with some very unusual
notion of truth, or, better to say, with unusual features of
correspondence, of adequateness of the formalism to intuition.
Further experience with this and analogous formalisms will show
whether these features are a misfortune or maybe some advantage. I
tried to give an argument in favor of the latter in my posting of
22 Oct 1998. Anyway, *some* (feasibly) consistent formal theory
describing our intuition of feasibility is presented, and we may
continue to experiment with it.
At least, we may consider this simply as an *exercise* in
creating a formalism starting from a simple and raw intuition.
Who can suggest anything better?
In the case of the (now) traditional mathematical formalisms like
Euclidean geometry, Peano arithmetic or ZFC, we get systems which are
more stable with respect to the length of (even imaginary) proofs.
We also have a good model theory for formalisms of this kind. Of
course, the situation here is at present much clearer and more
convenient for us than in the case of feasible arithmetic. But we
know that this "paradise", say, in geometry and set theory did not
always exist. Why should we hope for a "paradise" with feasible
numbers right at the beginning, having actually only very sporadic
experience (even since the first work of Parikh in 1971) with
formalizing this concept?
Let me also note the existence of some other, not yet mentioned,
related works on feasibility (the length-of-proofs aspect) by Robin
Gandy, Vladimir Orevkov, Alessandra Carbone and Stephen Semmes.
There are also some other works on feasibility of a different
character.
> (If
> logical consequence is to be defined some other way, I think you will
> still need an infinite bunch of things.)
As I wrote above, logical consequence is defined via some *formal*
rules corresponding to intuition and simultaneously making the
intuition more advanced, more mathematical. Probably the *feasible*
(or "additive") numbers (which are intuitively closed under successor
and sum) constitute such an infinite "bunch of things" as you ask
for, as do the so-called *middle* numbers (which are all < 1000 and,
nevertheless, also closed under successor; cf. again my posting of
22 Oct 1998). We may also consider another "infinity" of "polynomial
numbers", etc. What the *precise* relation of these infinities to
logical consequence is, I do not know. I know only by examples and
by intuition that the completely formal and rigorous system FEAS has
some reasonable, very informal interpretation in the "infinity" of
"all" feasible numbers.
> In fact, I think it is also the
> case--again, please correct me if I am wrong--that you'll need to say
> something about *all subsets* of an infinite set. I am not claiming in
> this latter case that you will strictly need a *transfinite* set, but I
> believe you will still need to give an account of *all subsets* of an
> infinite set (which may be difficult without postulating a transfinite
> set).
>
> In short, I can't see how you can speak intelligibly of
> first-order logic within a finitistic framework.
By formal and intuitively reasonable, plausible *finitary* rules.
By the way, if we are "living" in a world with a *partial*
(recursive) exponential function, can we prove the completeness of
classical FOL, or even of the Propositional Calculus, in this world?
(Compare with the question NP =? Co-NP.) Alternatively, is FOL
"really" complete? (Cf. also my paper in LNCS 118 (1981).) Maybe we
"really" should have a kind of incompleteness of FOL? Or should/can
we just consistently(?) *postulate* completeness, which also seems
plausible and is very desirable?
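To spell out the comparison a little (a well-known fact of Cook and
Reckhow, recalled here only for orientation, not as part of my
argument):

    NP = Co-NP   <=>   there exists a propositional proof system in
                       which every tautology has a proof of size
                       polynomial in the size of the tautology.

So asking whether the Propositional Calculus is "really" complete
when only feasible-size proofs are admitted is at least of the same
flavour as this famous open problem.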
Vladimir Sazonov
--                           | Tel. +7-08535-98945 (Inst.),
Computer Logic Lab.,         | Tel. +7-08535-98953 (Inst.),
Program Systems Institute,   | Tel. +7-08535-98365 (home),
Russian Acad. of Sci.        | Fax. +7-08535-20566
Pereslavl-Zalessky,          | e-mail: sazonov at logic.botik.ru
152140, RUSSIA               | http://www.botik.ru/~logic/SAZONOV/