[FOM] "algorithm" and the Church-Turing Thesis

Patrik Eklund peklund at cs.umu.se
Wed Aug 24 04:11:35 EDT 2016


Two aspects may be worth mentioning at the crossover between the 
foundations of mathematics and the foundations of computer science.

1. The definition of terms is not always sufficiently precise

Computer science, like mathematical logic, is content to define the 
set of terms essentially in natural language, as follows:

*) a variable is a term,
**) if omega is an n-ary operator and t1, ..., tn are terms, then 
omega(t1,...,tn) is also a term,
***) nothing else is a term

Of course, this is a natural-language formulation that can be made 
mathematically precise, but it never is, perhaps because there 
seems to be no need for it. Clearly, recursion is hidden in that 
informal definition. In the many-sorted case, making it precise is 
non-trivial (http://www.sciencedirect.com/science/article/pii/S0165011413000997). 
The advantage of making it precise using categories, functors and 
monads is that it then becomes possible 1) to compose monads (using 
Beck's "distributive laws"), so that the notion of terms can be 
extended, and 2) to construct terms over more elaborate categories 
than just the category of sets. Doing so, for instance, opens up new 
ways of understanding variables while maintaining the basic idea of 
substitution. The paper behind the link above gives examples, and our 
survey paper 
(http://www.sciencedirect.com/science/article/pii/S0165011415003152) 
further points out the use of, and need for, underlying monoidal 
categories. These papers also have a bearing on the FOM discussion of 
fuzziness and many-valuedness, as I pointed out in an earlier posting.
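
To make the hidden recursion explicit, here is a minimal 
single-sorted sketch in Haskell (an illustration only; the names are 
mine, and this is not the many-sorted or monoidal construction of the 
papers above). The informal definition becomes a free term monad: 
substitution is monadic bind, and composition of substitutions is 
Kleisli composition.

    -- A minimal single-sorted term datatype; operator names and
    -- arities are kept implicit in the constructor.
    data Term v = Var v                -- *) a variable is a term
                | Op String [Term v]   -- **) omega(t1,...,tn) is a term
                                       -- ***) nothing else ('data' is closed)

    -- Renaming of variables makes Term a functor.
    instance Functor Term where
      fmap f (Var v)   = Var (f v)
      fmap f (Op o ts) = Op o (map (fmap f) ts)

    instance Applicative Term where
      pure      = Var
      tf <*> tx = tf >>= \f -> fmap f tx

    -- Substitution is bind; the monad laws are precisely what makes
    -- composition of substitutions associative, with variables as units.
    instance Monad Term where
      Var v   >>= s = s v
      Op o ts >>= s = Op o (map (>>= s) ts)

For instance, (Op "plus" [Var "x", Var "y"]) >>= s replaces each 
variable with whatever term the substitution s assigns to it, and the 
recursion that the natural-language definition hides is explicit both 
in the datatype and in >>=.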

The definition of a Turing machine hides recursion very effectively, 
and never recovers it. That is why it is not possible to say "what 
algorithms ARE", as Martin Davis pointed out. Needless to say, the 
definition of a Turing machine also contains a bit of natural-language 
imprecision.
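
By contrast, a Turing machine is in essence a finite transition table 
driven by plain iteration. A minimal sketch (again Haskell, with names 
of my own choosing):

    type State  = Int
    type Symbol = Char
    data Move   = L | R

    -- The transition table: Nothing means "halt".
    type Table = (State, Symbol) -> Maybe (State, Symbol, Move)

    -- The tape as (cells left of the head, scanned cell, cells right).
    type Tape = ([Symbol], Symbol, [Symbol])

    -- One step: look up the table, write, move ('_' is the blank).
    step :: Table -> (State, Tape) -> Maybe (State, Tape)
    step delta (q, (ls, c, rs)) = do
      (q', c', m) <- delta (q, c)
      pure (q', move m (ls, c', rs))
      where
        move L (l:ls', x, rs') = (ls', l, x:rs')
        move L ([],    x, rs') = ([], '_', x:rs')
        move R (ls', x, r:rs') = (x:ls', r, rs')
        move R (ls', x, [])    = (x:ls', '_', [])

    -- Iterate until the table has no transition.
    run :: Table -> (State, Tape) -> (State, Tape)
    run delta cfg = maybe cfg (run delta) (step delta cfg)

The only recursion here is the unbounded iteration in run; nothing 
recurses over the structure of the data being computed. That is one 
way to read the remark that the recursion is hidden and never 
recovered.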

Clearly, CT is "broadly accepted", but it makes no sense to say it is 
"true", because the respective definitions of computability use 
different languages. Also, Turing's definition does not cover the 
transfinite case. Note that the inductive limit in category theory is 
also not transfinite. In computer science, "transfinite terms" 
(whatever they are!) make no sense, because of Turing and von Neumann.

Turing, in his 1936 paper, speaks about "a direct appeal to 
intuition", and it is close to being the intuition of a human 
understanding how a computation can be automated. However, a human 
understands more, as not all "comprehensible functions" are 
"computable functions". This is the dilemma of AI as well.

When we start to believe that seemingly precise definitions can safely 
be formulated in semi-natural language, we open the door to potential 
trouble. Lambda terms are a good example. The similarly informal 
definition of lambda terms is seen in computer science as very pretty, 
but in fact it admits undesirable lambda terms; unfortunately, this is 
itself seen in computer science as part of that prettiness, because 
tricks (algorithms) are then needed to fix it. The Fuzzy Terms paper 
suggests using a 'three-level' signature to handle this formally. The 
idea of levels of signatures was first presented in 2012 
(http://www.glioc.com/files_glioc/ModernEyesOnLambdaCalculus.pdf), 
which also gives a three-level view of Schoenfinkel's, Curry's and 
Church's constructions.
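
A sketch of the phenomenon (untyped, and deliberately not the 
three-level construction itself):

    -- The naive one-level definition admits every well-formed tree.
    data Lam = V String | App Lam Lam | Abs String Lam

    -- Nothing rules out "undesirable" terms such as self-application:
    selfApp :: Lam
    selfApp = Abs "x" (App (V "x") (V "x"))

    -- And substitution is no longer plain structural recursion: it
    -- must avoid variable capture by renaming under Abs, i.e. it needs
    -- exactly the kind of trick (algorithm) referred to above.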

Church in 1940 called 'lambda' an "informal symbol", and we tried to 
formalize the role of that symbol. Essentially what we say is the 
following:

lambda: I can abstract you!
operator: I can do that myself, thank you very much!

The 'type' sort on level two is intuitively the "type of all types", 
but its semantics cannot be treated by traditional universal algebra. 
Note also that whereas there are both "free monoids" and "free 
categories", there is no such thing as a "free monoidal category". 
Operators on level two are basically type constructors, so their 
models would need to be functors. This in turn means that the model of 
'type' would have to be the object set of a monoidal category. The 
HoTT community is totally outside this kind of thinking.
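
In Haskell the situation is visible as an analogy (nothing more):

    -- A unary level-two operator such as List is modelled not by a
    -- function on a carrier set but by a functor.
    data List a = Nil | Cons a (List a)

    instance Functor List where
      fmap _ Nil         = Nil
      fmap f (Cons x xs) = Cons (f x) (fmap f xs)

    -- A model of the sort 'type' must then collect objects such as
    -- List: the object part of a (monoidal) category, which is exactly
    -- what traditional universal algebra does not provide.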

2. The distinction between term and sentence is not well acknowledged

In first-order logic, a predicate symbol can be seen as an operator 
producing a term, but when we place quantifier symbols before the 
predicate symbol, we get something that clearly is not a term; that 
is, P and E.P are in "different worlds", but this is not properly 
acknowledged. The situation for lambda terms is similar: saying that 
terms are also lambda terms creates trouble, as mentioned above.
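
One way to keep P and E.P in different worlds, as a sketch reusing 
the Term type from above:

    -- Formulas form a separate layer: built over terms, never terms.
    data Formula v
      = Pred String [Term v]        -- atomic: a predicate over terms
      | Not (Formula v)
      | And (Formula v) (Formula v)
      | Exists v (Formula v)        -- E.P lives here, never inside Term

    -- The types alone now rule out e.g. Op "f" [Exists x p]: a
    -- quantified formula cannot sneak into a term position.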

Church (1940) said "We purposely refrain from making more definite the 
nature of the types o and iota, the formal theory admitting of a variety 
of interpretations in this regard." Note that our 'type' is basically 
Church's iota, and his o is the intuition for "sort of sentences". But 
we have to be careful.

Not making that distinction also boils down to not clearly saying 
which comes first, syntax or semantics. For terms, it is more like 
syntax first, but for predicates we in effect say that anything that 
can potentially be modelled by a relation is a potential predicate, so 
we may throw in predicates as we go along in logic. Typical situations 
arise in the case of metastatements in connection with Gödel's 
Incompleteness Theorem. Kleene asked "What is the nature of the 
predicate R(x,Y)?" in connection with R(x,Y) representing "Y is a 
proof of A(x)". Proceeding to write ∃Y.R(x,Y) ≡ ⊢ A(x) is really 
keeping everything "in the same bag" and ignoring any distinctions. 
This is why I sometimes speak of Gödel's Incompleteness Paradox rather 
than Theorem, since then we would be interested in finding out what is 
peculiar, rather than only admiring what is accepted as right.

In the categorical view, terms are constructed by monads (so that 
composition of substitutions works as expected), whereas sentences are 
constructed by functors. Proof rules and provability need a similar 
strictness, but this has not been done.
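
In the sketches above, this is exactly the difference between Term 
and Formula: Formula is a functor (variables can be renamed), but it 
carries no sensible monad structure.

    -- Renaming variables makes Formula a functor over the term monad...
    instance Functor Formula where
      fmap f (Pred p ts)  = Pred p (map (fmap f) ts)
      fmap f (Not a)      = Not (fmap f a)
      fmap f (And a b)    = And (fmap f a) (fmap f b)
      fmap f (Exists x a) = Exists (f x) (fmap f a)

    -- ...but there is no lawful bind: nothing tells us how to
    -- substitute a formula for a variable occurring inside a term.
    -- Terms compose by substitution (monad); sentences only translate
    -- (functor).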

---

My view is that a foundational debate of these things, in order to be 
fruitful, needs to allow us to go back along the historical pathway of 
logic and foundations.

---

Best,

Patrik



On 2016-08-23 20:03, Yiannis N. Moschovakis wrote:
> Prompted by Vardi's post, I thought I might point to the slides
> from my remarks in the panel discussion which motivated his
> article and the paper (mostly on the Church-Turing Thesis) that
> I wrote after the meeting: they are posted at
> http://www.math.ucla.edu/~ynm/lectures/2011dlmps.pdf and
> http://www.math.ucla.edu/~ynm/papers/ynm2014.pdf
> 
> I think that the problem of "defining" algorithms is important for
> the foundations of the theory of computation and has not received
> the attention it deserves. The paper cited above gives
> (incomplete) references to some of the work that has been done on
> it, mostly by Yuri Gurevich (and his collaborators) and myself.
> 
> Yiannis
> 
> 
> On Mon, Aug 22, 2016 at 4:00 PM, Richard Heck <richard_heck at brown.edu>
> wrote:
> 
>> On 08/22/2016 04:58 PM, Yiannis N. Moschovakis wrote:
>> 
>> Martin is absolutely right in saying that "the Church-Turing
>> Thesis says nothing about what algorithms ARE. It is about what
>> algorithms can and can not DO", a not-so-subtle point which is
>> often misunderstood.
>> 
>> However, the historical point he makes is not entirely correct: it
>> is true that Turing, Kleene and (as far as I know) Post never
>> mentioned algorithms in their fundamental papers on the
>> Church-Turing Thesis....
>> 
>> 
>> Turing writes in the 1936 paper, at the beginning of section 9:
>> 
>> No attempt has yet been made to show that the "computable" numbers
>> include all numbers which would naturally be regarded as computable.
>> All arguments which can be given are bound to be, fundamentally,
>> appeals to intuition, and for this reason rather unsatisfactory
>> mathematically. The real question at issue is "What are the possible
>> processes which can be carried out in computing a number?"
>> 
>> Turing goes on to give his famous arguments for the Church-Turing
>> thesis, one of which consists largely of a philosophical analysis of
>> the notion of a "possible process[] which can be carried out in
>> computing a number".
>> Surely this question is naturally interpreted as: What are algorithms?
>> 
>> Richard Heck
>> 

