# [FOM] Notations in mathematical practice

Arnon Avron aa at tau.ac.il
Thu Oct 29 06:03:00 EDT 2015


On Mon, Oct 26, 2015 at 09:18:08AM +0100, Arnold Neumaier wrote:
> On Sun, October 25, 2015 09:21, Arnon Avron wrote:
> > In his posting on free logic, Harvey Friedman made
> > the following side remark:
> >
> > "Mathematicians just want to make sure that there is no practical
> ambiguity in what they write."
> >
> > Do they??
>
> Yes, they do, since they want to communicate efficiently without losing
> time and patience to spell out the obvious.

If so, can you give me a reasonable explanation why they are ready
to lose time and patience by writing terms like {x|x^2>0}, instead of
"the set x^2>0"? Is the meaning of the latter in any way less "obvious"
than the meaning of "the function x^2"??

> > So no mathematician that respects himself would talk about
> > "the set x^2>0", only about "the set {x|x^2>0}".
>
> But the latter is still far from formally correct, since it fails to
> specify the ground set from which x is taken. Thus even you communicate in
> your formulas only what you think is sufficient for successfully making
> your point, and leave the remainder to the context. So you shouldn't
> criticize mathematicians for carrying this even further!
>
You confuse here the use of
abbreviations, macros, and agreements on how to shorten expressions
(or sometimes even make them longer but easier for people to grasp),
with the use of ambiguous expressions that might have two different meanings
(sometimes within the same paragraph!). Notations of the
second type are extremely dangerous, and should be avoided. In contrast, notations
of the first type are something we cannot do without,
and are  harmless as long as  there are obvious, simple
algorithmic directions for  how to turn them into the formally correct expressions
they abbreviate.  So of course I and everybody else will write just {x|x^2>0}
in contexts in which it is clear (usually via explicit
declaration at the beginning of the text or a generally accepted agreement,
like that, unless otherwise stated, n and k vary over N)
over what domain x varies. However, in cases in which there
is a danger of confusion, I (and everybody else) would immediately restore
the full correct term using the directions mentioned above.
I would be  very curious to get from you a list of formal, simple directions
of the same type  that would allow students to decide,
when they come across an expression like "f(x)", whether it refers
to some function or to some number!
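The distinction Avron is asking for is one that programming languages are forced to make explicit. A minimal sketch in Python (illustrative only; the name `square` is mine, not from the post): `square` denotes the function \lambda x.x^2, while `square(3)` denotes a number, and the two can never be confused.

```python
# "square" is the function \lambda x. x^2 itself;
# "square(3)" is the number obtained by applying it to 3.
# The ambiguous bare "f(x)" of informal mathematics cannot
# even be written here: one must say which of the two is meant.
square = lambda x: x ** 2

print(square)     # a function object, i.e. \lambda x. x^2
print(square(3))  # the number 9
```
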

> > But the same mathematician
> > would talk about "the function x^2", not about "the function \lambda
> x.x^2".
> >
> >   Is there any chance of changing this?

> There is no need to change this. It would make things only clumsy.

Again, can you give me a reasonable explanation why
talking about "the set x^2>0" would make things "clumsy", while
talking about "the function x^2" does not? Here too I am really
curious to hear one!

Take, for example, the proof of a simple formula like (a^b)^c = a^(b*c) (where a, b, c
vary over cardinal numbers) in the usual textbooks on set theory.
The proof as presented there is almost unreadable, and it is
very difficult to fully follow and grasp. (I remember how
desperately I had to fight with it in my first year as a student
of math.) This happens
even though  the proof is really rather simple, and once
one uses the correct notations, it boils down
to a straightforward, very easy to follow, *computation*
(concerning Currying and UnCurrying).
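The computation alluded to here is the standard bijection between C -> (B -> A) and (B x C) -> A: the set (a^b)^c corresponds to functions C -> (B -> A), and a^(b*c) to functions (B x C) -> A, and currying/uncurrying carries one to the other. A sketch in Python (the names `curry` and `uncurry` are the usual ones, not taken from the post):

```python
def curry(f):
    """Turn f : (B x C) -> A into its curried form C -> (B -> A)."""
    return lambda c: lambda b: f((b, c))

def uncurry(g):
    """Turn g : C -> (B -> A) back into (B x C) -> A."""
    return lambda pair: g(pair[1])(pair[0])

# Round-trip check on a sample function: uncurry(curry(f)) agrees
# with f on every pair, witnessing the bijection behind
# (a^b)^c = a^(b*c).
f = lambda pair: pair[0] * 10 + pair[1]
assert uncurry(curry(f))((2, 5)) == f((2, 5))
```

Once the lambda notation is in place, the textbook proof reduces to checking that these two maps are mutually inverse.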

> A typical mathematician reads and writes the customary informal
> mathematical exposition far quicker than any of its full formalizations,
> and probably obtains as a result much more insight.
>
> In contrast, a proof assistant gives (at a much higher cost) only a
> guarantee of correctness, but almost nothing beyond that. But it is the
> insight that makes the mathematics, not the truth (which is just a
> necessary requirement).

You keep talking about proof assistants (which you obviously abhor),
while I (being a teacher at my university with a lot of experience) have in mind
first of all the *students* of math (and related disciplines),
*as well as professional mathematicians*.
It is just a fact that the confusing notations I talked about
cause students (even the very best ones!) deep misunderstandings,
and they make a lot of mistakes because of them. Actually,
so do professional mathematicians from time to time. They do not
make these mistakes in the places students do. However, in more advanced
places they unfortunately do!

Anyhow, from the content of your answer and the way this content is
conveyed, it is clear to me that I am not going to convince you
that there are  notations which are absolutely and objectively
clumsy and misleading, and the only reason that you refuse to
recognize them as such is that you got used to them.
So let us agree that we totally disagree...

Arnon Avron