842: Gromov's "most powerful language" and set theory
José Manuel Rodríguez Caballero
josephcmac at gmail.com
Mon Feb 10 13:26:29 EST 2020
Harvey Friedman wrote:
> Perhaps Gromov is talking about some sort of informal notion of
> language power.
One way to define the power of a language is to place it on a spectrum: at
one end lies the most expressive language, and at the other end the
language best suited to automated reasoning. According to section 2.1 in
[Paulson 2019], there is a tension between these two extremes:
[...] more expressive formalisms are more difficult to automate
These two competing criteria for the power of a language (expressiveness
vs. automation) may be identified with what Gromov (page 11 in [Gromov
2013]) calls the entropy barrier (the difficulty of expressiveness) and the
energy barrier (the difficulty of automation):
> Maybe, the simplicity of Kolmogorov's argument and an apparent
> inevitability with which it comes along with translation of baby-Boltzmann
> to baby-Groethendieck is illusory. An entropy barrier on the road toward a
> conceptual proof (unlike the energy barrier surrounding a hard proof) may
> be unnoticed by one who follows the marks left by a pathfinder that keep
> you on the track through the labyrinth under the mountain of entropy.
It seems to me that Gromov takes expressiveness as his criterion for the
power of the category-theoretic framework. In his view, treating
information as a number is a rather restrictive point of view, and he
proposes a Grothendieck-style development of information theory. There is
an echo of E. Noether's approach to algebraic topology here: just as she
suggested using the Betti group (the quotient of the group of all cycles by
the subgroup of cycles homologous to zero) in place of the Betti numbers,
Gromov suggests using an algebraic structure for entropy rather than a
number, and this algebraic structure is of a homological nature [Baudot and
Bennequin 2015]. For Gromov, expressiveness can be defined negatively as
the capacity to avoid preconceptions (page 11 in [Gromov 2013]):
> All this is history. The tantalizing possibility suggested by entropy -
> this is the main reason for telling the story - is that there may be
> other little somethings around us the mathematical beauty of which we
> still fail to recognize because we see them in a curved mirror of our
> preconceptions.
Indeed, such preconceptions are precisely the features that make a language
good for automation: more preconceptions mean fewer degrees of freedom,
hence a smaller search space for a computer or a human, and therefore
better automation. This is, in my opinion, the reason for the dichotomy
between expressiveness and automation.
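For concreteness, Noether's replacement of a number by a group (the analogy
invoked above) can be written in the standard way; this formulation is
textbook material, not taken from Gromov's text:

```latex
% Chain complex of a space X with boundary maps \partial_n.
% Cycles and boundaries in degree n:
Z_n(X) = \ker \partial_n, \qquad B_n(X) = \operatorname{im} \partial_{n+1}.
% Noether's Betti group is the homology group, i.e. the quotient
H_n(X) = Z_n(X) / B_n(X),
% while the classical Betti number retains only its rank:
b_n(X) = \operatorname{rank} H_n(X).
```

Gromov's proposal is the analogous move for entropy: replace the single
number by a homological object from which such numbers can be extracted, as
in [Baudot and Bennequin 2015].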
Paulson, L.C., 2019. Formalising mathematics in simple type theory. In
Reflections on the Foundations of Mathematics (pp. 437-453). Springer, Cham.
URL = https://link.springer.com/chapter/10.1007/978-3-030-15655-8_20

Baudot, P. and Bennequin, D., 2015. The homological nature of entropy.
Entropy, 17(5), pp. 3253-3318.
URL = https://www.mdpi.com/1099-4300/17/5/3253

Gromov, M., 2013. In a search for a structure, part 1: On entropy.
Entropy, 17, pp. 1273-1277.
URL = http://math.mit.edu/~dspivak/teaching/sp13/gromov--EntropyViaCT.pdf