# FOM: counting

Brian Rotman brotman at qn.net
Thu Aug 27 13:26:17 EDT 1998

fom.doc 5/14/98

Patrick Peccatte (FOM 3/18) quotes some passages from my book AD
INFINITUM ... about reconceptualizing the sequence 1, 2, 3, ... and
suggests comparing the ideas behind them to those repeatedly expressed
on FOM by Vladimir Sazonov on the nature of natural numbers. The
suggestion is interesting. I have a lot of sympathy/solidarity with
Sazonov’s revisionism, his refusal to suppress doubts about the orthodox
understanding of the numbers, his insistence that we try to rethink them
from a distance, but I have a few problems with what he says or maybe
how he says it.

First, a point of rhetoric: it’s not a good idea for Sazonov to attack
the classical natural numbers or their conceptualization as unclear,
indeterminate, illusory or unintelligible when it is precisely their
clarity, determinacy, reality and intelligibility that makes them such
an attractive, stable and universally accepted object of thought for
generations of mathematicians. He would do better to question their
claim to be the unique and impossible-to-think-otherwise idealization of
the small, empirical magnitudes which anchor their meaning. Better, in
other words, to attack what it means for mathematics to idealize
palpable reality.

Second, designating, as he and others do, a huge ‘number’ such as 2^1000
as a remote upper bound to some feasible initial section far below it,
has two problems: it is arbitrary in an unhelpful way and it is
conceptually counter-productive. 1) On arbitrariness, the question goes
beyond the obvious why this bound rather than any other, but why the
deliberate remoteness. Presumably, the idea is to make an unbridgable
gap between messy empirical questions of length of calculation, proof,
definition, memory, and so on, and the purely mathematical investigation
of the feasible numbers one wants to focus on. But doesn’t this throw
the baby out with the bathwater? Surely, such a bound can only do real --
critical/conceptual/foundational -- work if it is an intrinsic
construct, if it arises from the properties of the feasible numbers as a
natural or inherent limit, or at the very least, is causally linked to
them in some specifiable way. Perhaps Sazonov’s log log x < 10 law is a
move in the direction of intrinsicality. But, as he gives it, it is
still arbitrary. Why not some function other than log log? And why 10? A
more intrinsic formulation might be log log x < r where the logarithm is
taken base r; but then arbitrariness reappears in the choice of r. 2) It
is misleading and counter-productive if it gives the impression that the
feasible numbers can be identified with an initial section of the
classical progression of natural numbers. From Sazonov’s contention (FOM
3/27) that feasible arithmetic, as he understands it, has no model in
ZFC, I don't think he wants that impression to be given, but I’m not
clear. If he doesn’t want the feasible numbers to be identical to an
initial section of the integers classically conceived, he shouldn’t talk
of 2^1000 as being a bound without qualifying what that is supposed to
mean and shouldn’t talk as if it were identical to the classical
number so named. The point is subtle since it bears on the very
question, namely the meaning of ‘number’ and ‘finite’, that feasibility
is meant to address.
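Sazonov’s law can be checked concretely. A minimal sketch, assuming both logarithms are taken base 2 -- the post does not fix a base or a threshold, which is exactly the arbitrariness at issue:

```python
import math

# A quick check of Sazonov's "log log x < 10" feasibility law.
# ASSUMPTION: both logarithms are base 2; the choice of base (and of
# the threshold 10) is precisely what the text calls arbitrary.
def feasible(x: int) -> bool:
    """True iff log2(log2(x)) < 10."""
    return math.log2(math.log2(x)) < 10

print(feasible(2**1000))  # True: log2(log2(2^1000)) = log2(1000) ≈ 9.97
print(feasible(2**1024))  # False: log2(log2(2^1024)) = log2(1024) = 10, and 10 < 10 fails
print(math.log10(math.log10(10**1000)) < 10)  # True: switching to base 10 moves the cutoff to 10^(10^10)
```

So 2^1000 just squeaks under the bound while 2^1024 does not, and changing the base of the logarithms relocates the cutoff entirely -- which is the point about arbitrariness.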

Of course we already know what finite means: a set is finite (within
first-order set theory such as ZFC) if it lacks a bijection onto a
proper subset. But this characterization has deep problems from a
foundational point of view. It requires an intuitive or informal notion
of finite to be in place to even set up what a first order axiom system
is. It makes finitude logically inseparable from infinitude (each is the
negation of the other), and so blocks off the possibility of it serving
as a basis for discussing infinity. Most importantly, it supports a
privative or negative definition of ‘finite’, since from the orthodox
view, which starts from an already present or given series of ‘the
natural numbers’, each initial section cannot but be understood as a
falling short or truncation of the full and priorly given infinite
series. The fundamental question then becomes: how can a non-privative,
positive finite be conceptualized? A finite that instead of being
understood top-down from infinity is constructed bottom-up from zero.
Put another way, we are confronted with two conceptions and ontologies
of the finite. The negative: that which falls short of a prior infinite,
an idea entirely consonant with Plato’s picture of the sensible world as
an imperfect copy of a prior heaven; the positive: that which is the
given condition for the possibility of the infinite, an idea consonant
with the prior materiality and ‘finitude’ of our sensible bodies.
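The characterization referred to above -- Dedekind’s -- can be written out explicitly: a set is finite just in case it admits no bijection onto a proper subset.

```latex
% Dedekind's characterization of finiteness, as cited above:
% X is finite iff no proper subset of X is in bijection with X.
\[
  \mathrm{finite}(X) \;\iff\;
  \neg\,\exists\, Y \subsetneq X \;\exists\, f : X \to Y
  \ \text{with } f \text{ a bijection}
\]
```

Note that the right-hand side is the negation of an existence claim, which is the privative, top-down character of the orthodox definition that the text objects to.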

One obvious source for bottom-up finite is counting: replace remote-N
feasibilism by identifying the feasible numbers as those which are
empirically possible, those which the material constraints of the
universe allow to be counted into existence. This could be made more
precise by having the counting process executed by an ideal machine
operating at the thermodynamic limits of action, and by appealing to the
known finitude of the universe (of energy, for example) to arrive at an
intrinsic limit to feasibility. Two such limits, which I won’t elaborate
on here, are 10^96 or 10^(19^98), depending on whether one integrates or
differentiates energy usage respectively. But such a procedure, however
interesting, is not mathematics; it is physics. Mathematics requires
idealizations controlled by formalisms that are autonomous, cut free
from empirical questions.
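The physical route just mentioned can be illustrated with a back-of-envelope calculation. This is NOT the book’s derivation -- the assumed inputs are Landauer’s bound (kT ln 2 joules per irreversible counting step) and rough standard estimates for the universe’s mass-energy and background temperature:

```python
import math

# Illustrative back-of-envelope only: one way to extract an intrinsic
# counting limit from physics. All figures below are assumptions, not
# values taken from the text.
k_B = 1.380649e-23    # Boltzmann constant, J/K
T_cmb = 2.725         # cosmic microwave background temperature, K (assumption)
c = 2.998e8           # speed of light, m/s
mass_universe = 1e53  # rough mass of the observable universe, kg (assumption)

energy_total = mass_universe * c**2          # total mass-energy, J
energy_per_step = k_B * T_cmb * math.log(2)  # Landauer cost of one counting step, J

max_count = energy_total / energy_per_step
print(f"counting limit ~ 10^{math.log10(max_count):.0f} steps")
```

With these inputs the limit lands in the broad vicinity of the first figure quoted above; the exact exponent depends entirely on the assumed physical estimates, which is why the text concludes that such a bound is physics, not mathematics.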

Why not, then, try to conceptualize (in order to axiomatize) the feature
of counting from zero in evidence here and common to all countings ever
performed or performable in this universe, namely necessary cessation,
by writing down a suitable set of arithmetical consequences of
cessation? This is what I do in my book. I call the numbers that can be
counted into existence from zero the iterates, and I take the principal
consequence of the fact that such counting cannot go on ‘for ever’ to be
the phenomenon of exit points -- cuts or regions in the iterates -- at
which each arithmetical operation undergoes a discontinuity. Thus for
large enough iterates n it will be the case -- that is, one takes it as
an axiom -- that n+n will no longer be an iterate; similarly for
multiplication, and so on. Such non-iterates will form a domain of
transiterates whose arithmetical properties will be constrained only by
their being sums, products, etc. of iterates. The result is a large class
of models of arithmetic which (for reasons I won’t go into here) I call
non-Euclidean, whose immediate difference from the standard model is
that they are already ‘non-standard’ in that each is bifurcated into an
initial, well-behaved section of iterates followed by an unruly and
counter-intuitive domain of transiterates with a rich and complex
structure.
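A crude toy model of this bifurcated picture can be sketched in code. The sharp cutoff EXIT and the names below are mine, purely for illustration -- in the book the exit points are cuts or regions in the iterates, not a single number, and the transiterates carry far more structure than a formal expression:

```python
# Toy sketch only, not the book's axiomatics. Arithmetic on iterates
# behaves classically until a result crosses an (arbitrarily chosen)
# exit point, at which point it leaves the iterates and is known only
# as a formal sum or product of iterates: a "transiterate".
from dataclasses import dataclass

EXIT = 1_000_000  # illustrative exit point (assumption; not from the text)

@dataclass(frozen=True)
class Transiterate:
    """A non-iterate, constrained only by being a sum/product of iterates."""
    expr: str

def add(m: int, n: int):
    total = m + n
    if total <= EXIT:
        return total                 # still an iterate: ordinary arithmetic
    return Transiterate(f"{m}+{n}")  # the operation exits the iterates

def mul(m: int, n: int):
    prod = m * n
    if prod <= EXIT:
        return prod
    return Transiterate(f"{m}*{n}")

print(add(2, 3))              # 5: well-behaved initial section
print(add(600_000, 600_000))  # Transiterate(expr='600000+600000')
print(mul(2_000, 700))        # Transiterate(expr='2000*700')
```

Even this caricature shows the intended shape of a non-Euclidean model: an initial, well-behaved section followed by a domain whose elements are individuated only by how they were reached from iterates.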

The idea is simple enough and seems totally obvious, but I had to
overcome a great resistance (from my classically trained self whose
attachment to ‘the’ natural numbers was deeper than I realized) to get
to it. Thus, the bulk of my book is concerned not with axiomatics and
formalism per se but with articulating an apparatus (a semiotic account
of mathematical reasoning/imagining derived from ideas of Peirce) that
allows a reconceptualization of ‘counting from zero’ which doesn’t
collapse back into the great attractor of classical counting. The
structure and details of this apparatus needn’t concern us here, but its
outcome surely should. This is because, by initiating a thinking of
‘finite’ whose motivating intuition assigns a coherent meaning to the
ideogram ‘...’ different from its classical meaning, it makes it
possible to think (i.e. in the present context, axiomatize) a positive
or non-privative finitude -- literally so from the bottom up, from
below, which is, after all, where we all are.

Name: Brian Rotman
Position: Research Scientist
Institution: Ohio State University
Research Interests: foundations of mathematics, semiotics
