[FOM] 605: Integer and Real Functions
Harvey Friedman
hmflogic at gmail.com
Thu Aug 27 04:12:38 EDT 2015
WARNING: I am entering into some important territory, some of which
lies outside my competence, and I hope this provokes interest among
readers who are competent here. I indulge in this because I think
there are some missed opportunities of a logical nature here.
**********************************
There is a fundamentally important area that raises all sorts of challenges.
That is the identification of natural functions on the integers and on
the reals.
In fact, one of the intriguing issues is that there are very natural
such constructions on the integers (reals) which do not have a
similarly natural construction on the reals (integers).
I think there is something deep here that we have not been able to get
at. Let me give several examples.
1. Addition and multiplication on N. Here the situation is extremely
nice, at least for the nonnegative integers and nonnegative reals.
Addition on N serves a very fundamental purpose. COUNTING finite sets.
#(A union B) = #A + #B, PROVIDED(!) A,B are disjoint.
Multiplication on N serves a very fundamental purpose. COUNTING finite
sets. #(A x B) = #A x #B.
Immediate observation: the addition situation is slightly more
involved than the multiplication situation. We need to put the
disclaimer that A,B are disjoint. No such disclaimer is needed for
multiplication. Is this a significant difference, and what are we to
make of it? OK, we can fix this by
#(A union B) = #A + #B - #(A intersect B)
and this leads to the well known more complicated formulas
https://en.wikipedia.org/wiki/Inclusion–exclusion_principle
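To see the pattern concretely, here is a minimal Python sketch (just
an illustration) checking the two set and three set formulas on random
finite sets:

    import random

    def random_set():
        # a random finite subset of {0,...,9}
        return set(random.sample(range(10), random.randint(0, 10)))

    for _ in range(1000):
        A, B, C = random_set(), random_set(), random_set()
        # #(A union B) = #A + #B - #(A intersect B)
        assert len(A | B) == len(A) + len(B) - len(A & B)
        # the three set inclusion-exclusion formula
        assert len(A | B | C) == (len(A) + len(B) + len(C)
                                  - len(A & B) - len(A & C) - len(B & C)
                                  + len(A & B & C))
    print("inclusion-exclusion checks pass")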
What do similar formulas systematically look like involving Cartesian
product? Do we wind up with some special kind of Diophantine theory
here? Maybe an interesting decidable fragment?
Another observation: we had to go to two dimensions to get
multiplication. You can't just be counting one dimensional finite sets
here.
Now a very interesting thing logically happens when we go to A,B,C.
You would think that we need to work with triple unions and triple
products, where the latter involves going to three dimensions. I.e.,
we need to consider triple addition and triple product. But this is
not the case, at least logically. This is because of the great deep
laws
A+B+C = (A+B)+C
AxBxC = (AxB)xC.
which are really the associative laws in disguise. And we also have the
miraculous commutative and distributive laws as well. These are
normally proved by induction on N. HOWEVER, the set theoretic versions
A union B == B union A
A cross B == B cross A
A cross (B union C) == (A cross B) union (A cross C)
DON'T need any induction, even for general sets!! This is because the
bijections can be treated explicitly.
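To make "explicit" concrete, here is a minimal Python sketch (the
function names are mine): the map (a,b) |-> (b,a) witnesses the
commutativity of cross, and the distributive identity holds literally
as an identity of sets, with no induction anywhere.

    def swap(p):
        # the explicit bijection witnessing A cross B == B cross A
        a, b = p
        return (b, a)

    A, B, C = {1, 2, 3}, {"x", "y"}, {"y", "z"}
    AxB = {(a, b) for a in A for b in B}
    BxA = {(b, a) for b in B for a in A}
    # swap maps A cross B one-one onto B cross A
    assert {swap(p) for p in AxB} == BxA and len(AxB) == len(BxA)
    # distributivity: A cross (B union C) = (A cross B) union (A cross C)
    AxC = {(a, c) for a in A for c in C}
    assert {(a, u) for a in A for u in B | C} == AxB | AxC
    print("explicit bijection checks pass")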
What are we to make of all this, logically? How do we want to treat
the similarities and differences between elementary arithmetic of N
and elementary finite (or even general) set theory?
E.g., we have the delicious question of which sentences of elementary
arithmetic can be proved using finite set theory with almost no axioms
in this way, i.e., by the method of explicit bijections. More
generally, by the method of explicit mappings, to handle inequalities.
Obviously, this statement can be proved by the method of explicit
mappings: "There is no finite set A, with at least 2 elements, such
that #(A x A) = #(A)".
CONJECTURE. "There are no nonempty finite sets A,B such that #(A x A)
= #(B x B union C x C) and #(B) = #(C)" cannot be proved by the method
of explicit mappings.
Presumably, one goes about this by getting clear about what axioms are
allowed, and then exhibiting a model of those axioms that has the
right stuff in it, in which the statement fails.
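Since B and C are disjoint with #(B) = #(C), the equation says #(A)^2
= 2#(B)^2, so the quoted statement is just the irrationality of
sqrt(2) in disguise. A minimal brute force sanity check in Python:

    # look for positive a, b with a*a == 2*b*b; by the irrationality
    # of sqrt(2), the search should come up empty
    hits = [(a, b) for a in range(1, 1000) for b in range(1, 1000)
            if a * a == 2 * b * b]
    assert hits == []
    print("no solutions with a, b < 1000, as expected")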
2. Addition and multiplication on R|>=0. First of all, there is no way
that finite sets can be used to naturally generate real addition or
multiplication.
For addition, the most basic thing is to use lengths of rods. Take one
rod of length x and one rod of length y and put one after the other,
and the length of the resulting thingie is x+y. So we get #(A followed
by B) = #A + #B. HOW COME we don't get into trouble like we did with
addition on N? OR, we weigh two objects. Put them together and again
you get #(A with B) = #A + #B. AGAIN, we don't get into trouble like we
did in N.
With multiplication on R|>=0, we take two rods. Now what? We can't do
anything rod-like with them to generate #A x #B. Once again, we have
to go to higher dimensions. So what we need to do here is to consider
AREAS to motivate real multiplication. We have similar miracles as we
have in 1 above. But there is the difference that we can solve all
sorts of equations like
#(A x A) = #((B x B) union (C x C)) and #(B) = #(C)
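Reading # as area, this equation is solvable over R|>=0 with x =
sqrt(2) y, precisely where it was unsolvable over N. A minimal
numerical sketch:

    import math

    y = 3.0
    x = math.sqrt(2.0) * y
    # the x-by-x square has the combined area of the two y-by-y squares
    assert abs(x * x - 2.0 * y * y) < 1e-9
    print("x =", x, "solves x^2 = 2y^2 with y =", y)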
Of course, we have the intermediate value theorem, real closed field
theory, etcetera. But now we can ask this: what happens if you write
down all of the fundamentally meaningful quantity problems? Which ones
can be proved or refuted using explicit mapping ideas?
3. Exponentiation and factorial in N. There is a fundamental COUNTING
motivation for base 2 exponentiation in N, 2^n. Namely, the power set.
#(POW(A)) = 2^#(A). So we are going very nicely with just thinking about
finite sets. Now with union, intersection, relative complement, power
set, and counting, all with just finite sets, we have a tremendous
amount of nice things happening. And it all just seems to need the
method of explicit mappings. What can you prove with the method of
explicit mappings? No induction here.
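The count #(POW(A)) = 2^#(A) is easy to confirm by explicit
enumeration; a minimal Python sketch:

    from itertools import combinations

    A = {1, 2, 3, 4}
    # enumerate POW(A) explicitly
    power_set = [set(s) for r in range(len(A) + 1)
                 for s in combinations(A, r)]
    assert len(power_set) == 2 ** len(A)
    print(len(power_set), "subsets, as 2^n predicts")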
Also, there is a more general construction - A^B = the set of all
functions from A into B. This also can be used, and a tremendous
number of satisfying things are going to be accessible to the method
of explicit mappings. Also we can consider, as part of the mix, very
basic subsets of A^B, e.g., the one-one maps.
This leads immediately to another function in addition to our n^m
function arising from the greatly natural A^B construction in finite
set theory. This is n!, the number of one-one elements of A^A, where
#(A) = n. So
turning this around, look at the fundamentally simple set theoretic
constructions such as the one-one elements of A^A, and see what
arithmetic functions you get.
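Both counts are easy to confirm by brute enumeration. A minimal Python
sketch, coding a function from A into B as the tuple of its values:

    import math
    from itertools import product, permutations

    A, B = [0, 1, 2], ["x", "y"]
    # all functions from A into B: #(B)^#(A) of them
    functions = list(product(B, repeat=len(A)))
    assert len(functions) == len(B) ** len(A)
    # the one-one elements of A^A are the permutations of A: n! of them
    injections = list(permutations(A))
    assert len(injections) == math.factorial(len(A))
    print(len(functions), "functions A -> B;",
          len(injections), "one-one maps A -> A")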
So we know that 2^n and n! are not only natural in N, they in fact
serve a fundamental purpose - to count the number of elements in
absolutely fundamental combinations of two sets.
4. Exponentiation and factorial in R|>=0. Now things get more
problematic and controversial. What fundamental purpose does R|>=0
exponentiation serve? Well, since N exponentiation serves such a
fundamental counting purpose, we might expect that R|>=0
exponentiation serves an analogous fundamental length or at least area
purpose. WELL, this does not appear to be the case. We ought to be
able to prove a very strong theorem that it cannot serve such a
purpose.
Now actually, there is something suspiciously close to an area
interpretation of R|>=0 exponentiation. Namely, let f(x) be the area
under 1/t from 1 to x, for x > 1. Then f(x) = ln(x). This is the inverse
function to e^x, essentially. But there are three funny things going
on here. First, this is an inverse function, and not the function we
are after, which is exponentiation. Second, what the heck is this base
e instead of base 2? Third, why the heck does 1/x come into the
picture?
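At least the area reading itself is easy to test numerically: a crude
Riemann sum for the area under 1/t from 1 to x does reproduce ln(x). A
minimal sketch:

    import math

    def area_under_reciprocal(x, steps=100000):
        # midpoint Riemann sum for the area under 1/t, 1 <= t <= x
        h = (x - 1.0) / steps
        return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

    for x in (2.0, math.e, 10.0):
        assert abs(area_under_reciprocal(x) - math.log(x)) < 1e-6
    print("the area under 1/t from 1 to x agrees with ln(x)")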
The most commonly cited fundamental purpose is compound interest
https://en.wikipedia.org/wiki/E_(mathematical_constant)#History
Instead of adding interest at a given rate annually, you can add it
more frequently at an annualized rate. Then you can take the limit and
get the theoretically continuous compounding. Starting with 1 monetary
unit, your account will have e^(Rt) units after t time units under
continuously compounding interest at rate R per time unit. This all
has a well known mathematical model with appropriate limits as the
number n of compoundings per time unit goes to infinity.
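A minimal numerical sketch of that limit (the particular R and t are
just sample values):

    import math

    R, t = 0.05, 10.0   # rate per time unit, elapsed time units
    for n in (1, 12, 365, 10**6):
        # compounding n times per time unit
        print(n, "compoundings per unit:", (1 + R / n) ** (n * t))
    print("continuous limit e^(Rt):  ", math.exp(R * t))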
Of course, all of this is far far far more subtle than the situation
with 2^n on N. But at least it is a fundamental purpose.
Another fundamental story as to how R|>=0 exponentiation comes about
is through a simple and very basic differential equation. But what
fundamental purpose does it serve? The usual answer that I remember is the
differential equation
f(x) = f'(x), f(0) = 1.
You don't get 2^x so fast, and there is the question of the most well
motivated way to get 2^x in the real context. (I think there is a nice
answer, better than the obvious CV stuff, but I forgot it). And as
opposed to the situation in N, 2^x doesn't look so fundamental.
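At least the differential equation story is completely concrete: even
a crude Euler scheme for f = f', f(0) = 1 recovers e^x. A minimal
sketch:

    import math

    def euler(x, steps=100000):
        # forward Euler for f'(t) = f(t), f(0) = 1, integrated to t = x
        h = x / steps
        f = 1.0
        for _ in range(steps):
            f += h * f
        return f

    assert abs(euler(1.0) - math.e) < 1e-4
    print("f(1) =", euler(1.0), "vs e =", math.e)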
There is of course x^y, x,y > 0 in R. What is the most obviously
fundamental story for this? Then of course, you can get 2^y just by
setting x = 2 and reminding everybody that 2 is a fundamental number,
both as a real and as an integer.
There is another fundamental way of getting real exponentiation. The
above differential equation does look like it serves a fundamental
purpose, I think from elementary mathematical physics. I think the
following may be viewed as different, although it perhaps can be
unified.
f(x+y) = f(x)f(y).
The only reasonable solutions are f(x) = e^(cx); the others lie in the
deep pathology class (they are nonmeasurable, for instance).
AHA! The above equation can also be used fundamentally in N. The only
solutions there are f(n) = c^n, with c = f(1) in N. NOTE that there
are no pathological solutions in N.
Now something else deep happens that distinguishes R from N. There is
a simpler functional equation:
f(x+1) = 2f(x), f(0) = 1, x in R|>=0.
In N, we have exactly one solution: 2^n. What about in R|>=0?
Lots of solutions here, even continuous ones. This didn't happen with
the two variable functional equation f(x+y) = f(x)f(y), in R|>=0.
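For instance f(x) = 2^x (1 + sin(2 pi x)/2) is continuous, satisfies
f(x+1) = 2f(x) and f(0) = 1, and is not 2^x; any continuous period 1
perturbation equal to 1 at 0 works. A minimal numerical check:

    import math

    def f(x):
        # 2^x times a continuous period 1 factor that equals 1 at x = 0
        return 2.0 ** x * (1.0 + 0.5 * math.sin(2.0 * math.pi * x))

    assert abs(f(0.0) - 1.0) < 1e-12
    for x in (0.1, 0.5, 1.7, 3.25):
        assert abs(f(x + 1.0) - 2.0 * f(x)) < 1e-9
    # yet f is visibly not 2^x
    assert abs(f(0.25) - 2.0 ** 0.25) > 0.1
    print("a continuous solution different from 2^x")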
So it seems like N likes functional equations with one variable,
whereas R does not like functional equations with one variable.
So now I close this food for thought with n!. In N, we have
f(n+1) = (n+1)f(n), f(0) = 1
with the unique solution n!.
In R|>=0 this equation has lots and lots of solutions, even analytic ones.
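For instance f(x) = Gamma(x+1)(1 + sin(2 pi x)) is analytic, satisfies
f(x+1) = (x+1)f(x) and f(0) = 1, and still agrees with n! at the
integers. A minimal numerical check:

    import math

    def f(x):
        # Gamma(x+1) times an analytic period 1 factor, equal to 1 at integers
        return math.gamma(x + 1.0) * (1.0 + math.sin(2.0 * math.pi * x))

    assert abs(f(0.0) - 1.0) < 1e-12
    for x in (0.3, 1.5, 2.75):
        assert abs(f(x + 1.0) - (x + 1.0) * f(x)) < 1e-6
    for n in range(5):
        assert abs(f(float(n)) - math.factorial(n)) < 1e-6
    print("an analytic solution different from Gamma(x+1)")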
So what we would like to have is a TWO VARIABLE functional equation
with an initial condition that uniquely determines factorial on R|>=0.
And then see what that two variable functional equation means on N.
And even if it does NOT mean anything on N, then try to argue that on
R|>=0, it still serves a fundamentally important purpose.
I gather that there is some use of integration to give some
characterization of factorial on R|>=0, but I am not aware of what
fundamental purpose can be ascribed to satisfying that equation.
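Presumably what is meant is Euler's integral: Gamma(x+1) = the
integral from 0 to infinity of t^x e^(-t) dt, which agrees with x! at
the nonnegative integers. A minimal numerical check that the integral
reproduces n!:

    import math

    def gamma_plus_one(x, steps=200000, cutoff=60.0):
        # crude midpoint Riemann sum for the integral of t^x e^(-t)
        # over [0, cutoff]; the tail beyond the cutoff is negligible here
        h = cutoff / steps
        return sum(((i + 0.5) * h) ** x * math.exp(-(i + 0.5) * h) * h
                   for i in range(steps))

    for n in range(6):
        assert abs(gamma_plus_one(float(n)) - math.factorial(n)) < 1e-3
    print("the integral reproduces n! for n = 0,...,5")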
Is there any satisfying multivariable functional equation that is
satisfied by n! on N? Now that I think of it, there must be some that
surround the use of C(n,m). Maybe the readers can fill me in on this,
with a multivariable functional equation that does exactly what we
want it to do for factorial on R|>=0?
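One candidate of the kind presumably meant here: f(n) = C(n,m) f(m)
f(n-m) for all 0 <= m <= n, which n! satisfies since C(n,m) =
n!/(m!(n-m)!). A minimal check:

    import math

    def f(n):
        return math.factorial(n)

    for n in range(10):
        for m in range(n + 1):
            assert f(n) == math.comb(n, m) * f(m) * f(n - m)
    print("n! satisfies f(n) = C(n,m) f(m) f(n-m)")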
5. f = gog.
This is the functional square root equation: g is a functional square
root of f. Given f, find a natural g with this equation. This is
closely related to the above. For which basic f can we find a basic g?
You can do this for f(x) = ax^b. You get another function g(x) = cx^d,
where c and d can be easily determined.
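Namely, for f(x) = ax^b with a,b > 0, take d = sqrt(b) and
c = a^(1/(1+d)); then g(g(x)) = c^(1+d) x^(d^2) = ax^b. A minimal
numerical check (the particular a, b are just sample values):

    import math

    a, b = 3.0, 4.0              # f(x) = a x^b, with a, b > 0
    d = math.sqrt(b)
    c = a ** (1.0 / (1.0 + d))

    def f(x): return a * x ** b
    def g(x): return c * x ** d

    for x in (0.5, 1.0, 2.0, 7.3):
        assert abs(g(g(x)) - f(x)) <= 1e-9 * max(1.0, f(x))
    print("g(x) = c x^d satisfies g(g(x)) = f(x)")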
QUESTION: Find some natural classes of functions, X, such that for all
f in X there is a (unique) g in X such that f = gog.
Horrible things seem to happen when you look at, e.g.,
2^n = g(g(n)) on N
e^x = h(h(x)) on R|>=0.
The intuitive idea is clear. Let f(x) be the result of doing something
to x. Then g(x) should be doing it half way. When, where, and how does
this make sense?
************************************************************
My website is at https://u.osu.edu/friedman.8/ and my youtube site is at
https://www.youtube.com/channel/UCdRdeExwKiWndBl4YOxBTEQ
This is the 605th in a series of self contained numbered
postings to FOM covering a wide range of topics in f.o.m. The list of
previous numbered postings #1-599 can be found at the FOM posting
http://www.cs.nyu.edu/pipermail/fom/2015-August/018887.html
600: Removing Deep Pathology 1 8/15/15 10:37PM
601: Finite Emulation Theory 1/perfect? 8/22/15 1:17AM
602: Removing Deep Pathology 2 8/23/15 6:35PM
603: Removing Deep Pathology 3 8/25/15 10:24AM
604: Finite Emulation Theory 2  8/26/15  2:54PM
Harvey Friedman