AI for math
José Manuel Rodríguez Caballero
josephcmac at gmail.com
Sun Dec 5 21:37:16 EST 2021
Vladik wrote:
> An interesting article "Advancing mathematics by guiding human intuition
> with AI", see https://www.nature.com/articles/s41586-021-04086-x
>
Each AI framework needs to be analyzed separately. I will focus on neural
networks. Motivated by one of Lewis Carroll's [1] pillow problems, on the
probability that a random triangle is obtuse, I trained my computer to
decide, given the sides of a triangle, whether it is obtuse. Without using
any geometric algorithm, the computer managed to classify new triangles by
generalizing from a set of training examples, in which each triangle is
labeled according to whether or not it is obtuse. I used geometric
algorithms to generate the training labels, but not for the test.
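A minimal sketch of this kind of experiment (an illustration, not my
original code; it assumes NumPy and scikit-learn, and labels triangles via
the law of cosines):

```python
# Train a small neural network to decide, from the three side lengths
# alone, whether a triangle is obtuse. The labels are computed with a
# geometric rule (law of cosines), but the network never sees that rule.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def random_triangles(n):
    """Sample side triples uniformly in [0,1]^3, keeping valid triangles."""
    sides = rng.uniform(0.0, 1.0, size=(4 * n, 3))
    # triangle inequality: a + b + c > 2 * max(a, b, c)
    valid = sides.sum(axis=1) > 2 * sides.max(axis=1)
    return sides[valid][:n]

def is_obtuse(sides):
    """Law of cosines: obtuse iff the largest squared side exceeds
    the sum of the squares of the other two."""
    s = np.sort(sides, axis=1)
    return s[:, 2] ** 2 > s[:, 0] ** 2 + s[:, 1] ** 2

X_train, X_test = random_triangles(20000), random_triangles(2000)
y_train, y_test = is_obtuse(X_train), is_obtuse(X_test)

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                    random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The network learns the obtuse/non-obtuse boundary well above chance from
the labeled examples alone.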
The success of AI on this problem seems to be related to the geometric
structure of the problem, which is explained in [2].
My hypothesis was that neural networks do not learn by understanding; they
are just solving a geometric problem. For example, in the case of the
Titanic competition [3], I think I was able to predict who would survive
from the data, with a score of 0.78468 (top 15% of the competition),
because there should be a geometric model of survival aboard the ship that,
although we do not understand it well, my neural network was able to detect.
To test my hypothesis, I trained the machine on a problem that, as far as I
know, has no geometric interpretation: deciding whether a given integer is
square-free, that is, whether its only square divisor is 1. As expected,
the trained neural network was not able to provide a rule better than
always guessing that a given number is square-free (the asymptotic density
of square-free integers is 6/pi^2).
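That baseline is easy to check numerically; the following sketch estimates
the density of square-free integers up to 100000 and compares it with
6/pi^2 (approximately 0.6079):

```python
# Empirical check of the baseline: a classifier that always answers
# "square-free" is correct with probability tending to 6/pi^2.
import math

def squarefree(n):
    """True if no perfect square greater than 1 divides n."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

N = 100000
density = sum(squarefree(n) for n in range(1, N + 1)) / N
print("empirical density:", density)
print("6/pi^2          :", 6 / math.pi ** 2)
```

So a network that fails to beat roughly 61% accuracy on this task has
learned nothing beyond the base rate.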
A cursory conclusion might be that neural networks may be useful for
geometers, but almost useless for number theorists (and for geometers over
finite fields). Of course, in some situations, number-theoretic problems
can be reformulated as geometric problems, e.g., the Weil conjectures
connect combinatorics over finite fields with the Betti numbers of
algebraic varieties over continuous fields. Also, not all branches of
geometry may enjoy the same benefit from neural networks; e.g., I guess
that symplectic geometry will profit more from neural networks than
differential geometry, because rigidity is crucial to "learn" from data.
Indeed, according to Stéphane Mallat [4], the only features that can be
learned from high-dimensional data are global invariants. The difficulty
of learning local invariants is related to the problem known as the curse
of dimensionality.
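A toy illustration of this point (my own hypothetical example, not taken
from Mallat's talk): as the dimension grows, pairwise distances between
random points concentrate around their mean, so "local" structure becomes
increasingly hard to distinguish from data.

```python
# Distance concentration in high dimension: the relative spread
# (std/mean) of distances from one random point to the others shrinks
# as the dimension d grows.
import numpy as np

rng = np.random.default_rng(0)
ratios = {}
for d in (2, 10, 100, 1000):
    pts = rng.uniform(size=(500, d))
    # distances from the first point to all the others
    dists = np.linalg.norm(pts[1:] - pts[0], axis=1)
    ratios[d] = dists.std() / dists.mean()
    print(f"d={d:5d}  relative spread = {ratios[d]:.3f}")
```

The relative spread decreases roughly like 1/sqrt(d), which is one face of
the curse of dimensionality.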
Therefore, if mathematics is to be supplemented with neural networks, it
will be to develop intuition about global invariants in high-dimensional
geometric structures. In the study of local invariants, or in
number-theoretic situations, neural networks do not offer any help.
Kind regards,
Jose M.
[1] Carroll, Lewis. Curiosa Mathematica: A New Theory of Parallels.
Macmillan, 1890.
[2] Cantarella J, Needham T, Shonkwiler C, Stewart G. Random triangles and
polygons in the plane. The American Mathematical Monthly.
2019;126(2):113-134.
[3] Machine learning Titanic competition: https://www.kaggle.com/c/titanic
[4] Stéphane Mallat - High Dimensional Classification with Invariant Deep
Networks: https://youtu.be/0nOqTOHNxvg