[FOM] deautomatization in mathematics

Patrik Eklund peklund at cs.umu.se
Tue Oct 8 02:21:35 EDT 2019

On "technical features of constructive type theories interfere with automation":


Dear José, Dear All,

What are those "technical features"? One is certainly the technicality 
of dealing with dependent types. I pointed this out at the "Church 75" 
workshop at St Andrews years ago 
(http://www.glioc.com/files_glioc/ModernEyesOnLambdaCalculus.pdf). The 
function type 's=>t' indeed requires the type constructor '=>'. The 
sorts 's' and 't' are recognized to reside in some underlying signature. 
But '=>' is a binary operator on sorts, with those sorts appearing as 
constants not seen to reside in any signature. The trick we suggest in 
that paper, "Modern 
eyes on lambda-calculus", and we have used it elsewhere as well, is 
based on the "three-level signature", where sorts on level one become 
constant operators on level two, where in turn '=>' is a binary operator 
in the second level signature. Ground terms (constructed types) 
generated by the level two signature, shift down to level three to 
become sorts (so all sorts from level one come down to level three and 
there are also all the constructed types), where 'application' as an 
operator is introduced. Operators from level one are also brought down 
to level three, but appear as constants. For example, an operator omega 
: s -> t on level one becomes a constant omega : -> (s=>t) on level 
three.
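
The construction can be sketched in a few lines of code. This is my own minimal illustration, not the authors' formalisation; all names (arrow, level1_ops, and so on) are mine:

```python
# Level 1: a signature with sorts and typed operators.
level1_sorts = {"s", "t"}
level1_ops = {"omega": ("s", "t")}   # omega : s -> t

# Level 2: a one-sorted signature in which level-1 sorts are constants
# and '=>' is the binary operator building function types.
def arrow(a, b):
    return ("=>", a, b)

# Ground terms generated at level 2 are the constructed types.
constructed = {arrow("s", "t")}      # the type s => t

# Level 3: sorts are the level-1 sorts together with the constructed
# types; level-1 operators come down as constants of arrow sort.
level3_sorts = level1_sorts | constructed
level3_consts = {name: arrow(dom, cod)
                 for name, (dom, cod) in level1_ops.items()}

print(level3_consts["omega"])        # ('=>', 's', 't')
```

So omega : s -> t on level one reappears as the constant omega : -> (s=>t) on level three, exactly as in the paragraph above.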

Doing all this removes the need to fiddle with 'binding variables'. 
Undesirable lambda terms will not appear, so renaming and such things 
are not necessary. One may compare this to de Bruijn, and some may see 
our approach as taking that to the extreme. I don't. In that paper you 
also see how 
the three-level signature is interpreted e.g. in Schönfinkel's, Church's 
and Curry's structures. For instance, in light of the three-level 
construction it is unclear where Schönfinkel's building blocks actually 
appear. Church's 'iota' type is our 'type' as a sort in the one-sorted 
signature on level two.
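
To make concrete how 'application' as an explicit level-3 operator avoids variable binding, here is a hedged sketch (again my own illustration): an application is only well-formed when the function term has an arrow sort whose domain matches the argument's sort, so ill-sorted terms are simply never constructed, and no renaming is ever needed.

```python
def apply_sort(f_sort, x_sort):
    """Result sort of applying a term of sort f_sort to one of sort x_sort."""
    if isinstance(f_sort, tuple) and f_sort[0] == "=>" and f_sort[1] == x_sort:
        return f_sort[2]
    raise TypeError("ill-sorted application")

omega_sort = ("=>", "s", "t")   # omega : -> (s=>t) as a level-3 constant
print(apply_sort(omega_sort, "s"))   # 't'
```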

Needless to say, 'power types' and such things can be handled similarly, 
connecting structures like "description logic" to lambda calculus rather 
than first-order logic.

What that three-level structure essentially does is to show clearly 
that the 'lambda' symbol is indeed only an informal one, as Church 
said. My gut feeling is that quantifiers are similarly informal 
symbols, but we haven't looked at it in detail. Here is also where 
Church's 'o' type (for "propositions") comes into play. It embraces the 
'bool' type needed for 
propositional operators in a "Boolean signature" on level one, where 
'true' is one of the constants of type 'bool', i.e., true : -> bool. 
This is the underlying technicality, and the reason why I am so 
reluctant to let the 'true' in "P is true" or P=true, i.e., the 
assertion that the term 'P' equals the term 'true', be the same 'true' 
as in "|-P is TRUE", i.e., (it is TRUE that) there is a proof sequence 
establishing "P=true". So Church's 'o' 
covers more than just that 'bool'. Or I should say that the truth of a 
Boolean term (a term of type 'bool') is not to be confused with the 
truth of a sentence, i.e., we may have 'P(x)=true' but 'true' in 
'Ex.P(x) is true' is not the same 'true', in my opinion, and according 
to our way of working with constructed types.
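
The distinction between the two 'true's can be made mechanical. In this sketch (my illustration, with my own names), the object-level 'true' is just a constant term of sort 'bool' in the signature, while the truth of the judgement "P = true" is a meta-level matter, represented here by the host language's own booleans:

```python
# Object level: 'true' is merely a term, on a par with any other constant.
true_term = ("const", "true", "bool")

# Meta level: whether "P = true" holds is a judgement ABOUT terms; the
# answer is a meta-level truth value, a different 'true' altogether.
def equals_true(term):
    return term == true_term

print(equals_true(true_term))   # meta-level truth value
print(true_term[1])             # the object-level constant's name, 'true'
```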

So this is perhaps my approach to "translation ... from a formalism to 
another one".

The main reason, I should say, for us doing this is to manage extensions 
from binary logic to uncertainty and many-valued logic. The term 
functor/monad construction remains the same across levels, but when it 
appears over different categories like various monoidal closed 
subcategories (one being the "fuzzy" Goguen category), we have some 
interesting things. E.g. we get "fuzzy lambda-calculus" for free, and we 
arrive at a deeper notion of "fuzzy description logic", so relational 
structures in general are extended "canonically" to many-valuedness. 
Many-valued ontology comes into play, and so on, and so forth.
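
As one standard example of what "many-valuedness" over the Goguen category looks like, truth values live in the unit interval [0,1], conjunction is the product t-norm, and implication is its residuum. This is a textbook sketch of Goguen-style fuzzy connectives, not code from the authors:

```python
def conj(a, b):
    """Goguen (product) t-norm on [0,1]."""
    return a * b

def implies(a, b):
    """Residuum of the product t-norm."""
    return 1.0 if a <= b else b / a

print(conj(0.8, 0.5))      # 0.4
print(implies(0.8, 0.4))   # 0.5
```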

Another way of understanding why we do this is to look at 
Goguen-Meseguer logical structures, where the term functor is abstract 
since in the general models there are no signatures. What we say is that 
any logic, however abstractly presented, must have an underlying 
signature. And when it does, there are terms, that then appear in 
sentences, that in turn appear in proofs, and so on, "latively". This is 
why we say "logic is lative". Logic is a structure, where we add more, 
piece by piece. First the signature giving terms. This is a monad. Then 
sentences based on terms. This is just a functor; if the sentence 
functor were a monad, those sentences would themselves be terms. Then we 
have conglomerates of sentences. Then we have proof rules on those 
conglomerates. And then we have "truth" of such things at the end of 
that rope, which is something very different from 'true' in the 
underlying signature at the starting point.
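
The claim "the term functor is a monad" can be made concrete: the unit embeds a variable as a term, and bind is simultaneous substitution. The following is my own minimal illustration of that monad structure, not the authors' formalisation:

```python
def var(x):
    """Unit of the term monad: embed a variable as a term."""
    return ("var", x)

def op(name, *args):
    """An operator symbol applied to subterms."""
    return ("op", name, list(args))

def bind(term, subst):
    """Bind of the term monad: simultaneous substitution.
    subst maps each variable name to a term."""
    if term[0] == "var":
        return subst(term[1])
    _, name, args = term
    return ("op", name, [bind(a, subst) for a in args])

t = op("f", var("x"), var("y"))
s = lambda v: op("g", var(v)) if v == "x" else var(v)
print(bind(t, s))
# ('op', 'f', [('op', 'g', [('var', 'x')]), ('var', 'y')])
```

The monad laws hold here because substituting variables for themselves is the identity and substitutions compose; sentences built on top of such terms would carry only a functor structure, as described above.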

Yes, this is a syntactical view, and it indeed differs very much from 
having symbols and putting them one by one in sequence, using natural 
language formulations to say which sequences are allowed and which are 
not.

This is our approach to automatization/deautomatization, and indeed, 
that "lativity" is important.



On 2019-10-08 07:53, José Manuel Rodríguez Caballero wrote:
> Dear FOM members,
>   I would like to begin with a quotation from [1]:
>> Proof assistants must be capable of performing lengthy deductions
>> automatically. But more expressive formalisms are more difficult to
>> automate. Even
>> at the most basic level, technical features of constructive type
>> theories interfere with
>> automation.
> This principle may suggest a one-dimensional polarization in the
> formalization of mathematics as follows: on one extreme we may have
> the most expressive formalization of mathematics and on the other
> extreme we may have the formalization of mathematics which is better
> for automation. I would like to distinguish two processes:
> - mathematical automatization: A translation of a theory from a
> formalism to another one which is more efficient for automation, but
> less expressive.
> - mathematical deautomatization: A translation of a theory from a
> formalism to another one which is more expressive, but less efficient
> for automation.
> On the one hand, mathematical automation is useful in order to reduce
> some of the tasks (not all) of a mathematician to a microprocessor. On
> the other hand, the importance of mathematical deautomatization is
> more mysterious. So, I would like to ask for examples of
> deautomatization in the history of mathematics, i.e., the renunciation
> of efficient automation in order to obtain more expressive power.
> Sincerely yours,
> José M.
> References:
> [1] Paulson LC. Formalising Mathematics In Simple Type Theory. arXiv
> preprint arXiv:1804.07860. 2018 Apr 20.
> https://arxiv.org/abs/1804.07860
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> https://cs.nyu.edu/mailman/listinfo/fom
