# [FOM] When is it appropriate to treat isomorphism as identity?

Monroe Eskew meskew at math.uci.edu
Wed May 20 14:55:33 EDT 2009

On Tue, May 19, 2009 at 4:19 PM, Andrej Bauer <andrej.bauer at andrej.com> wrote:
> I asked several physicists at my department whether they do anything in their classroom or research by using the standard epsilon-delta technique from analysis. The answer seems to be negative. They always argue informally using infinitesimals. Which makes me wonder why we (the math teachers) teach physics majors all those epsilons and deltas. They don't need them. They can and do get along perfectly well with dx's and dy's. So why don't we teach them dx's and dy's instead? I am pretty certain physicists don't particularly care what logic comes with the infinitesimals, as long as it works for them.

Perhaps because it's not necessary to go deep into the theory for many
applications.  Usually, they can just work with the derivative
formulae and the notation dy/dx.  Recall that such notation is used in
standard analysis, and it does not imply the existence of
infinitesimals.  The fact that a physicist uses a result without
understanding the mathematical theory behind it, does not impugn the
mathematical theory.  By your reasoning, the physicists may as well go
along with memorizing formulas and ignoring proofs.

One should be careful, however, because the subtleties of the epsilons
and deltas end up putting one in significantly different situations,
such as continuity vs. uniform continuity, convergence vs. uniform
convergence, or a family of continuous functions vs. an equicontinuous
family of functions.  If you fudge these distinctions, you end up
treating as small a quantity which can in fact become arbitrarily
large, depending on where you are in the domain.  This can cause
problems when you're trying to do something useful, like approximate a
function by simpler functions.
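To see the distinction concretely, here is a small sketch (my own illustrative example, not from the discussion above) using f(x) = 1/x on (0, 1), which is continuous but not uniformly continuous: the delta that works for a given epsilon depends on the point, and shrinks to zero as x approaches 0.

```python
# Sketch: for f(x) = 1/x on (0, 1), the delta needed for a given epsilon
# depends on where you are in the domain -- continuity is not uniform.
# If |x - y| < delta(x) = eps * x^2 / (1 + eps * x), then
# |1/x - 1/y| < eps (a standard estimate: |1/x - 1/y| = |x - y| / (x*y)).

def delta_for(x, eps):
    """A delta that works at the point x for f(t) = 1/t, given eps."""
    return eps * x * x / (1 + eps * x)

eps = 0.1
for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, delta_for(x, eps))
# The deltas shrink toward 0 with x: no single delta works on all of
# (0, 1), which is exactly the failure of uniform continuity.
```

Fudging the dependence of delta on x here is precisely what leads to treating an arbitrarily large quantity as small.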

Furthermore, the fact that WLOG we may assume the epsilons and deltas
are rational numbers is significant.  If we want to approximate within
a certain margin of error, we are talking about a concrete rational
margin of error, not an infinitesimal, which cannot be found in the
real world.  To do this approximation we should find rational epsilon
and delta satisfying our approximation needs, and it would be good not
to take them much smaller than necessary, since the approximating
functions become increasingly complex as you demand greater accuracy.
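As an illustrative sketch of that cost (my own example, under the assumption that "complexity" is measured by the number of terms), consider approximating e by partial sums of 1/k! to within concrete rational margins of error:

```python
import math

# Sketch: approximating e = exp(1) by partial sums of sum(1/k!), with
# concrete rational margins of error (1/10, 1/1000, 1/10^6).  The number
# of terms needed grows as the margin shrinks -- a reason not to demand
# much more accuracy than the application actually requires.

def terms_needed(eps):
    """Number of terms of sum(1/k!) until the partial sum is within eps of e."""
    total, term, k = 0.0, 1.0, 0
    while abs(total - math.e) >= eps:
        total += term
        k += 1
        term /= k
    return k

for eps in [1/10, 1/1000, 1/10**6]:
    print(eps, terms_needed(eps))
```

The margins here are ordinary rationals, and each tightening of the margin buys accuracy at the price of a longer (more complex) approximating sum.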

Lastly, why would you want to expand your ontology unnecessarily with
infinitesimals?  It seems very un-constructive.

> Classical mathematics creates the wrong kind of mathematical intuition and expectations for a computer scientist to have. He is much better off knowing (also) constructive mathematics, because it fits more naturally with the nature of computation.
> Here is an example: a well educated computer scientist typically knows that a polynomial (with real coefficients) has finitely many roots. He therefore naturally expects that there is a thing called "the number of distinct roots of a polynomial". Surely, such a simple number can be computed, yes? No.

If one keeps in mind that existence does not imply computability, one
avoids such errors.  Of course, the natural question for the
constructivist is, "Why care about things we can't compute?"  Again
with the example of approximation, one may have a classical result
that X is approximated increasingly well by a countable recursive
sequence Y_i.  It may not be (easily) computable exactly how far down
the sequence we have to go to achieve a desired degree of accuracy.
However, we may let a computer iterate the approximation process for
as long as we want.  We may not know how long it will take, but the
abstract classical theory tells us that the process will eventually
come to an accurate enough approximation.  This might be usefully
applied.
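That pattern can be sketched in a few lines (a hypothetical example of my own: the iteration y -> cos(y), which classical theory guarantees converges to the unique solution of y = cos(y), with no rate of convergence assumed known in advance):

```python
import math

# Sketch of the pattern above: classical theory guarantees the sequence
# converges to the desired limit, but we make no use of any a priori
# bound on how many steps are needed -- we simply iterate until the
# desired accuracy is reached.

def iterate_until(eps, y0=0.0, max_steps=10**6):
    """Iterate y -> cos(y) until successive terms agree within eps."""
    y = y0
    for step in range(max_steps):
        y_next = math.cos(y)
        if abs(y_next - y) < eps:
            return y_next, step + 1
        y = y_next
    raise RuntimeError("no convergence within max_steps")

root, steps = iterate_until(1e-10)
print(root, steps)  # root satisfies cos(root) ~= root
```

We did not need to know `steps` ahead of time; the classical convergence theorem is what licenses letting the loop run until the stopping criterion is met.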

> I hardly need to be reminded that there is such a discipline.

I was merely pointing out that the dominance of classical mathematics
in math departments has hardly hindered research in the theory of
computation.

Best,
Monroe
