FOM: "Was sind und was sollen die Zahlen"

Vladimir Sazonov sazonov at logic.botik.ru
Tue Nov 11 16:38:23 EST 1997


Let me finish the description of some aspects of formalizing 
Feasible Arithmetic. (This is the third message on this subject to 
the FOM list.) I hope that after reading this part the idea 
of the formalization will become sufficiently clear. 

In a previous posting I presented an argument for restricting the
abbreviation mechanisms which are usually involved in
mathematical proofs; without such a restriction it is hardly
possible to formalize feasible numbers. Note again that all the
details are contained in my paper "On Feasible Numbers", LNCS,
Vol. 960 (http://www.botik.ru/~logic/SAZONOV/papers.html). Here I
will therefore give only some very short and schematic notes.

First, I claim that ordinary FOL (formulated either as a
Hilbert- or Gentzen-style calculus or as Natural Deduction)
already involves an abbreviating tool (besides those which we
may, and actually do, use in addition). This is seen best of all
in the Natural Deduction rules for existential quantification.
Specifically, the corresponding introduction rule

		     A(t)
		--------------
		\exists x A(x)

actually serves as a rule introducing a name
(abbreviation) for the term t. This name is just the quantified
variable x. The elimination rule


                 [A(x)]   | If (\exists x A(x)) is proved,
                    .     | we may temporarily assume A(x)
                    .     | and continue proving the goal C.
                    .     |
    \exists x A(x)  C     |
    ------------------    |
             C            |


allows us to *use* the abbreviation x introduced by the previous 
rule, say when the second rule follows immediately after the 
first.  Let us consider this situation as *non-normal* and try to 
impose the requirement that all inferences should be *normal*. 
The simplest way to do this is to use the *traditional notion of 
normal inference*, which is well known in proof theory. We may 
also consider some more liberal notions of normality. (Details 
are omitted.) The main idea is to abandon the use of 
abbreviations of terms (or to use them only in a properly 
restricted way).

This restriction is required because terms deal with *objects* of 
the `intended' (possibly very vague) model, such as Feasible 
Numbers.  Such abbreviations may therefore have too strong and 
undesirable an influence on the nature of (the objects of) this 
model.  More precisely, abbreviations may actually play the role 
of axioms (on the existence of big, imaginary numbers like 
2^1000) which we did not intend to impose on the model. (E.g., if 
your intention is to consider a theory of Hereditarily Finite 
sets, why inadvertently include the Infinity Axiom, perhaps in 
some hidden form?)

On the other hand, abbreviations of formulas and of proofs (if 
they do not somehow involve abbreviations of terms) are still 
allowed and even seem desirable, as in traditional mathematics. 
As I mentioned above, some kinds of term abbreviations may also 
be quite safe. 

Unfortunately (or maybe quite happily?) the above-mentioned 
normality restrictions (which do not necessarily coincide with 
the traditional notion of normality) simultaneously restrict the 
use of the Modus Ponens rule, because it can make an inference 
involve *implicit* abbreviations. (An existential introduction 
rule may come to be followed, in a reasonable sense, by the 
corresponding existential elimination.) So it is possible that 
modus ponens applied to normal proofs will give a non-normal one. 
As usual, we may prove (by some complicated induction argument) a 
normalizability *meta*theorem.  However, we know that the cost 
of normalization may be extremely high (of non-elementary 
complexity, as was shown by Orevkov and Statman). We cannot 
guarantee that the resulting inference will be feasible. I would 
put it differently: after normalization we will possibly get only 
an *imaginary inference*. All of this means that modus ponens is 
applicable, but at some cost.  Moreover, sometimes the cost is 
(practically) infinitely high.  We have the same problems with 
*transitivity of implication*. (I remember some notes of 
Poincare on postulating transitivity *despite* the fact that it 
fails in the real world.)
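
To recall the scale of this blow-up: the Orevkov/Statman lower
bound is non-elementary, i.e. of the order of an iterated
exponential (a `tower' of 2's) whose height grows with the size
of the proof. A small sketch, again in Lean (the function tower
is just the standard iterated exponential, nothing specific to
Feasible Arithmetic):

    -- iterated exponential: tower 0 = 1, tower (k+1) = 2 ^ tower k
    def tower : Nat -> Nat
      | 0     => 1
      | k + 1 => 2 ^ tower k

    #eval tower 4   -- 65536; already tower 5 = 2 ^ 65536 dwarfs 2 ^ 1000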

Now I present several *provable facts* in Feasible Arithmetic to 
demonstrate some unexpected effects. 

It is possible to *formally define* two initial parts S and M of 
the `set' F of all numbers of Feasible Arithmetic. Intuitively, 
S consists of `small numbers' and M of `middle numbers', i.e. 
numbers lying before a `horizon' (Vopenka's informal notion; note 
also that there was a popular lecture by Kolmogorov where he 
discussed, very informally, the notions of small, middle, and big 
numbers).  Further, 0 \in S, 0 \in M, and there are *proper* 
inclusions

    S \subset {0,...,10} \subset M \subset {0,...,1000} \subset F
                                          \subset {0,...,2^1000}.

There exists a biggest small number. It is 5, 6, or 7, and we
cannot decide which one exactly, because the underlying logic is
(essentially) classical.

Moreover, *there exists no biggest number* in M (M is *closed 
under successor*), and this is not a contradiction! 

Any attempt to prove that 1000 is in M (say, by applying modus 
ponens 1000 times) will not succeed, because the resulting proof 
will be non-normal and its normal form is non-feasible. The 
bigger the numeral n, the harder it is to prove that n \in M, and 
for n = 1000 this is practically impossible.
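
Schematically, with M replaced by an arbitrary predicate (so this
is only a toy sketch of the shape of such a proof, not the real
definition of M), the obvious normal proof of n \in M from
0 \in M and closure under successor is a chain of n modus ponens
steps, and so grows with n:

    -- toy sketch: proving M n from M 0 and closure under successor
    -- requires a chain of n modus-ponens (successor) steps
    example (M : Nat -> Prop) (h0 : M 0)
        (hS : ∀ x, M x -> M (x + 1)) : M 3 :=
      hS 2 (hS 1 (hS 0 h0))
      -- for M 1000 the corresponding chain would have 1000 steps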

It looks very strange, but at the same time we can prove 
(non-constructively) that M *has a last number*! 

This does not contradict the fact that M is closed under 
successor, because we can get a contradiction from A and ~A 
(defined as A -> falsity) only by modus ponens, which is too 
difficult to `apply' in our case.

We may define *feasible real numbers* between 0 and 1 in binary
notation as sequences like 0.011001111100100101... containing
a `middle' number of digits, i.e. as maps of the type
M -> {0,1}.  (We may represent such maps as restrictions to M of
maps {0,...,1000} -> {0,1}.) In this way we get the notion
of a *feasible continuum* which is BOTH *continuous* (because
there is no last digit in such real numbers) AND *discrete*
(because simultaneously there should be a last digit). 
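
As a toy illustration of this representation only (again in Lean;
the bound 1000 is taken from the inclusions above, and a total
map on {0,...,1000} is of course only a stand-in for a map on the
vague set M):

    -- a binary expansion given on {0,...,1000}; only its restriction to
    -- the vague initial part M counts as the feasible real itself
    abbrev Expansion := Fin 1001 -> Bool

    -- example: 0.0101... (digit i is 1 exactly when i is odd)
    def alternating : Expansion := fun i => i.val % 2 == 1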

This notion of a feasible continuum seems to me promising.  Is 
this not a possibility for a new *Feasible Non-Standard 
Continuous&Discrete Analysis* which would be `good' for computers 
and probably for physics? Note that, in contrast to Robinson's 
ordinary approach, the `non-standard' features arise here at the 
very beginning, from the very nature of the (feasible) natural 
numbers considered. 

Of course, this is the simplest possible version of a formal 
feasibility theory. It may be considered as something of a 
curiosity, or just as an exercise in estimating the cost of 
cut-elimination/normalization. It is necessary (and hopefully 
possible) to do much, much more work to get a theory which will 
be technically more useful and interesting.  Nevertheless, even 
this theory seems to illuminate something essential about 
feasibility.  At least, instead of fantasizing about feasibility, 
the Heap Paradox, etc., we can try to *prove* some theorems or 
metatheorems. I think this is more fruitful because it is governed 
by some kind of formal logic.  Finally, note that this approach to 
vagueness differs essentially from the well-known considerations 
of fuzzy logic, etc. It may also be considered as a kind of 
feasible complexity theory. 

 
Vladimir Sazonov


