[FOM] 202:Proof(?) of Church's Thesis

Harvey Friedman friedman at math.ohio-state.edu
Mon Jan 12 14:41:50 EST 2004


There is a considerable amount of literature on Church's Thesis, some of
which is recent. I certainly have not absorbed this literature, and so I
offer this approach with the expectation that others who have thought about
this and/or written about this will explain their ideas here on the FOM.

If what I say here turns out to be, to various degrees, "known", then I
offer this as a clear and concise and friendly and public exposition.

We will propose a condition on deterministic computation which appears to us
to be capable of being made compelling to a degree which corresponds roughly
to the length and care of the accompanying prose. However, we limit the
length and care of any accompanying prose here.

As is usual when discussing the classical Church's Thesis and classical
variants, we assume that computation proceeds along a discrete time axis
numbered 0,1,2,3,... . We also assume that at each stage n >= 0 of
computation, there is a well-defined finite "system" that provides the
"global state" at that stage. We also assume that these systems are "exact",
i.e., that there is no vagueness or randomness about what the finite
structure is at any stage of computation.

From now on, we fix an "algorithm". We want to prove, under appropriate
principles, that the "action" of this algorithm is effective in the usual
standard precise sense of recursion theory.

Computation is initialized by a choice of any finite system M0, which is
"appropriate" for the given algorithm. This system M0 represents the initial
global state. 

These systems will be taken to be of the form M = (D,R1,...,Rk), where

i) D is a finite set (of locations);
ii) k >= 1;
iii) the R's are relations on D of various arities >= 1 - i.e.,
relationships between locations.

The type of M is the sequence of arities n1,...,nk of the R's.
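
For concreteness, here is a small Python sketch (not part of the argument
itself) of how such finite systems and their types might be encoded. The
names System and type_of are ours, and the sketch assumes, purely for
convenience, that every relation is nonempty, so that its arity can be read
off from any of its tuples.

from typing import FrozenSet, Set, Tuple

# A finite system M = (D,R1,...,Rk): D is a finite set of locations, and
# each Ri is a set of tuples over D.  This encoding is only illustrative.
System = Tuple[FrozenSet, Tuple[Set[tuple], ...]]

def type_of(M: System) -> Tuple[int, ...]:
    # The type of M: the sequence of arities n1,...,nk of its relations.
    # For simplicity we assume every relation is nonempty.
    _, relations = M
    return tuple(len(next(iter(R))) for R in relations)

# Example: a system of type (2,1) - a binary relation (edges between
# locations) together with a unary relation (marked locations).
M0: System = (frozenset({0, 1, 2}),
              ({(0, 1), (1, 2)},   # R1: binary
               {(2,)}))            # R2: unary
assert type_of(M0) == (2, 1)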

As computation proceeds according to the given algorithm, the sequence of
global states is represented by the sequence of finite systems M0,M1,M2,...,
all of the same type. This common type is considered to be specified by the
given algorithm.

Let us call an infinite sequence M0,M1,M2,... of finite systems a
*computation sequence* for the given algorithm if and only if it represents
the succession of global states during some computation that follows the
given algorithm. Thus the M's must all have the type specified by the given
algorithm. 

PREPRINCIPLE 1. Let M0,M1,... and N0,N1,... be two computation sequences.
Suppose Mi = Nj. Then Mi+1 = Nj+1.

We say "preprinciple" because we can object to this statement as follows.
The choice of actual locations is arbitrary. We may replay the same
computation. Since the computation is deterministic, it will proceed in the
same way. But the actual identity of the locations may be different.

One can go back and forth with such objections, so it is prudent to weaken
Preprinciple 1 as follows.

PRINCIPLE 1. Let M0,M1,... and N0,N1,... be two computation sequences for
the given algorithm. Suppose Mi and Nj are isomorphic. Then Mi+1 and Nj+1
are isomorphic.
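
Since the Principles are stated up to isomorphism, it may help to make
explicit what an isomorphism of finite systems is. The following brute-force
Python sketch (same illustrative encoding as above; domains are assumed to
consist of comparable values such as integers, and efficiency is beside the
point) tests whether two finite systems of the same type are isomorphic by
searching over all bijections between their domains.

from itertools import permutations

def isomorphic(M, N):
    # Brute-force isomorphism test: try every bijection f from dom(M) onto
    # dom(N) and check that it carries each relation of M exactly onto the
    # corresponding relation of N.
    (D, Rs), (E, Ss) = M, N
    if len(D) != len(E) or len(Rs) != len(Ss):
        return False
    D = sorted(D)
    for image in permutations(sorted(E)):
        f = dict(zip(D, image))
        if all({tuple(f[x] for x in t) for t in R} == set(S)
               for R, S in zip(Rs, Ss)):
            return True
    return False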

Principle 1 does not capture something deeper.

The passage from Mi to Mi+1 is what is crucial in computation. Thus we must
be sensitive to the relationship between Mi and Mi+1.

In the passage from Mi to Mi+1, some locations may remain, some locations
may be removed, and some locations may be created.

PRINCIPLE 2. Let M0,M1,... and N0,N1,... be two computation sequences for
the given algorithm. Suppose Mi and Nj are isomorphic. Then (Mi,Mi+1) and
(Nj,Nj+1) are isomorphic.

We need to explain the notation (M,N) where M,N are finite systems.

Let M = (D,R1,...,Rm) and N = (E,S1,...,Sn). We define

(M,N) = (D union E,D,E,R1,...,Rm,S1,...,Sn)

where the domain is D union E, and D,E are viewed as unary relations.
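
In the same illustrative Python encoding as above, the combined system (M,N)
can be formed as follows; combine is our name for it.

def combine(M, N):
    # (M,N) = (D union E, D, E, R1,...,Rm, S1,...,Sn), with D and E carried
    # along as unary relations on the common domain D union E.
    (D, Rs), (E, Ss) = M, N
    union = frozenset(D) | frozenset(E)
    unary_D = {(x,) for x in D}
    unary_E = {(x,) for x in E}
    return (union, (unary_D, unary_E) + tuple(Rs) + tuple(Ss))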

Now let us back up and write down a particularly basic preprinciple.

PREPRINCIPLE 3. Let M be a finite system whose type is appropriate for the
given algorithm. There exists a computation sequence M = M0,M1,... for the
given algorithm.

We can make the usual objections concerning arbitrariness of locations. Thus
we restate this as follows.

PRINCIPLE 3. Let M be a finite system whose type is appropriate for the
given algorithm. There exists a computation sequence M0,M1,... for the given
algorithm, where M and M0 are isomorphic.

We now discuss how we interpret halting of computation in this treatment.
Halting occurs at the least i such that Mi = Mi+1. This is supported by the
following fact, which is an immediate consequence of Principle 2.

FACT. Let M0,M1,... be a computation sequence for the given algorithm which
halts at stage i. Then for all j >= i, Mj = Mi.

We can also propose to interpret the output of computation upon halting.
This can be taken to be simply Mi, where halting occurs at stage i. Under
this interpretation, the "act" of "producing" the output after halting is
considered part of the computation.
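
Under this reading of halting and output, a computation can be simulated by
iterating the transition map until a fixed point is reached. The Python
sketch below assumes a hypothetical computable function next_state
implementing the passage from Mi to Mi+1; no such function is given here,
and the point of Principles 1-4 is precisely to justify that one exists.

def run(M0, next_state, max_steps=None):
    # Follow the computation sequence M0, M1, M2, ... .  Halting is the
    # least i with Mi = Mi+1, and the output is then taken to be Mi itself.
    # Returns (i, Mi) on halting, or None if max_steps is exhausted first.
    M, i = M0, 0
    while max_steps is None or i < max_steps:
        N = next_state(M)
        if N == M:
            return i, M
        M, i = N, i + 1
    return None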

An objection can be made that even under this interpretation, systems are
not appropriate outputs, particularly systems of the same type as the global
states of the computation.

There are good ways around this, allowing for a special output phase.
However, we choose to ignore this complicating point, and instead
concentrate on augmenting the above three Principles in order to prove the
following:

***the set of all isomorphism types of finite systems that lead to halting
under the given algorithm is recursively enumerable in the standard sense of
recursion theory.***

This is the essence of the matter, and such a proof can be easily molded
into proofs of the obvious related statements.

The crucial additional principle that we need must tell us that the passage
from any global state M appropriate for the given algorithm, to the next
global state M' according to the given algorithm (up to isomorphism), must
be "basic". 

Of course, if we don't somehow take advantage of the transition system
(M,M'), then we are right back to where we started, and we run around in
circles. 

Fix a finite system M of the type specified by the given algorithm. Think of
M as a global state appropriate for the given algorithm.

By Principle 3 there is a transition (M,M'), and by Principle 2 it is unique
up to isomorphism; i.e., the combined system (M,M') is unique up to
isomorphism.

Now imagine that the domain of M is enormous, but finite. Now M' is to be
created via the given algorithm from M in some "basic" way,
deterministically. Thus there is only one correct (M,M'), up to isomorphism,
according to the given algorithm.

What can we say if we consider some (M,M'') which is wrong? I.e., (M,M'') is
not the correct (M,M') up to isomorphism?

There must be something about (M,M'') that is at odds with the given
algorithm, something that can be seen "locally", rather than just
"globally". To reinforce the point, we urge the reader to imagine that the
domain of M is enormous.

For our purposes, we do not need to go into an analysis of just what we mean
by "locally" here. This would lead us on a rather tricky path, with plenty
of complications, since we would be arguing that we have captured the
appropriate notion of "local".

Instead, we offer that no matter what "local" means here, we must have the
following principle.

There must be a fixed integer n, "present" in the given algorithm, such that
the following holds.

*there must be something wrong with (M,M'') that is detectable by looking at
at most n locations; i.e., at most n elements of dom(M) union dom(M'').*

We are now prepared to state the final principle.

PRINCIPLE 4. There exists a positive integer n (depending on the given
algorithm) such that the following holds. Let M0,M1,... be any computation
sequence for the given algorithm. Let (Mi,M) be such that M is of the same
type as M0 and (Mi,M) is not isomorphic to (Mi,Mi+1). Then there exists a
subset E of dom(Mi) union dom(M), with at most n elements, such that for no
M' do we have 

i) (Mi,M') and (Mi,M) are identical when restricted to E;
ii) (Mi,M') and (Mi,Mi+1) are isomorphic.
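
To illustrate the quantifier structure of Principle 4 (and nothing more),
here is a toy Python sketch in the encoding above. It searches for a small
set E of locations witnessing that a candidate transition (Mi,M) is wrong.
In the Principle the inner quantifier "for no M'" ranges over all finite
systems; to keep the search finite we replace it by a user-supplied finite
candidate_pool, an assumption made purely for illustration. The helpers
combine and isomorphic are the sketches given earlier; restrict is our name
for restriction of a system to a set of locations.

from itertools import combinations

def restrict(system, E):
    # Restriction of a system to the locations in E: keep only the locations
    # in E and only the tuples lying entirely inside E.
    domain, relations = system
    E = frozenset(E)
    return (frozenset(domain) & E,
            tuple({t for t in R if set(t) <= E} for R in relations))

def local_error_witness(Mi, Mi1, M, n, candidate_pool):
    # Search for a set E of at most n locations from dom(Mi) union dom(M)
    # such that no M' in candidate_pool both agrees with M on E (as a
    # transition from Mi) and yields (Mi,M') isomorphic to (Mi,Mi1).
    locations = sorted(set(Mi[0]) | set(M[0]))
    for size in range(n + 1):
        for E in combinations(locations, size):
            if not any(restrict(combine(Mi, Mp), E) == restrict(combine(Mi, M), E)
                       and isomorphic(combine(Mi, Mp), combine(Mi, Mi1))
                       for Mp in candidate_pool):
                return set(E)
    return None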

THEOREM. Assume Principles 1-4. The set of all isomorphism types of finite
systems M that lead to halting under the given algorithm, is recursively
enumerable in the sense of recursion theory. Furthermore, the transition
function is appropriately recursive in the sense of recursion theory, and
also the output function is appropriately partial recursive in the sense of
recursion theory.
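
The recursive enumerability claim in the Theorem then comes down to a
standard dovetailing argument. As a hedged illustration only: assuming the
transition map has been extracted as a computable function next_state (which
is what Principles 1-4 are used to justify), and assuming some effective
enumeration `systems` of representatives of the isomorphism types of finite
systems of the specified type, the halting isomorphism types can be
enumerated as follows, reusing the run sketch above.

from itertools import count

def enumerate_halting(systems, next_state):
    # Dovetailing: at stage b, pull one more representative from `systems`
    # and run each representative seen so far for up to b steps, yielding
    # (once) each representative whose computation halts.
    seen, emitted = [], set()
    source = iter(systems)
    for bound in count(1):
        try:
            seen.append(next(source))
        except StopIteration:
            pass
        for idx, M0 in enumerate(seen):
            if idx not in emitted and run(M0, next_state, max_steps=bound) is not None:
                emitted.add(idx)
                yield M0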

The proof of this Theorem is straightforward. The contribution here, if any,
is the recognition that the core technical idea (straightforward) can be
adapted to provide an approach to Church's Thesis. Of course, I looked for
such a core technical idea, not knowing whether it would be straightforward
or not in advance.

NOTE: This approach suggests certain formal models of computation that have
probably not been studied, and which are very powerful. For such a study to
be of interest, an a priori upper bound needs to be placed on the growth of
the number of locations as computation proceeds - perhaps even +1. Then one
can study just how minimal the types can be, as well as the error number n,
so that the usual complete recursive enumerability phenomenon kicks in. I
expect that the inherent computational power is tremendous.

*********************************************

I use http://www.mathpreprints.com/math/Preprint/show/ for manuscripts with
proofs. Type Harvey Friedman in the window.
This is the 202nd in a series of self-contained numbered postings to
FOM covering a wide range of topics in f.o.m. The list of previous
numbered postings #1-149 can be found at
http://www.cs.nyu.edu/pipermail/fom/2003-May/006563.html  in the FOM
archives, 5/8/03 8:46AM. Previous ones counting from #150 are:

150:Finite obstruction/statistics  8:55AM  6/1/02
151:Finite forms by bounding  4:35AM  6/5/02
152:sin  10:35PM  6/8/02
153:Large cardinals as general algebra  1:21PM  6/17/02
154:Orderings on theories  5:28AM  6/25/02
155:A way out  8/13/02  6:56PM
156:Societies  8/13/02  6:56PM
157:Finite Societies  8/13/02  6:56PM
158:Sentential Reflection  3/31/03  12:17AM
159:Elemental Sentential Reflection  3/31/03  12:17AM
160:Similar Subclasses  3/31/03  12:17AM
161:Restrictions and Extensions  3/31/03  12:18AM
162:Two Quantifier Blocks  3/31/03  12:28PM
163:Ouch!  4/20/03  3:08AM
164:Foundations with (almost) no axioms, 4/22/0  5:31PM
165:Incompleteness Reformulated  4/29/03  1:42PM
166:Clean Godel Incompleteness  5/6/03  11:06AM
167:Incompleteness Reformulated/More  5/6/03  11:57AM
168:Incompleteness Reformulated/Again 5/8/03  12:30PM
169:New PA Independence  5:11PM  8:35PM
170:New Borel Independence  5/18/03  11:53PM
171:Coordinate Free Borel Statements  5/22/03  2:27PM
172:Ordered Fields/Countable DST/PD/Large Cardinals  5/34/03  1:55AM
173:Borel/DST/PD  5/25/03  2:11AM
174:Directly Honest Second Incompleteness  6/3/03  1:39PM
175:Maximal Principle/Hilbert's Program  6/8/03  11:59PM
176:Count Arithmetic  6/10/03  8:54AM
177:Strict Reverse Mathematics 1  6/10/03  8:27PM
178:Diophantine Shift Sequences  6/14/03  6:34PM
179:Polynomial Shift Sequences/Correction  6/15/03  2:24PM
180:Provable Functions of PA  6/16/03  12:42AM
181:Strict Reverse Mathematics 2:06/19/03  2:06AM
182:Ideas in Proof Checking 1  6/21/03 10:50PM
183:Ideas in Proof Checking 2  6/22/03  5:48PM
184:Ideas in Proof Checking 3  6/23/03  5:58PM
185:Ideas in Proof Checking 4  6/25/03  3:25AM
186:Grand Unification 1  7/2/03  10:39AM
187:Grand Unification 2 - saving human lives 7/2/03 10:39AM
188:Applications of Hilbert's 10-th 7/6/03  4:43AM
189:Some Model theoretic Pi-0-1 statements  9/25/03  11:04AM
190:Diagrammatic BRT 10/6/03  8:36PM
191:Boolean Roots 10/7/03  11:03 AM
192:Order Invariant Statement 10/27/03 10:05AM
193:Piecewise Linear Statement  11/2/03  4:42PM
194:PL Statement/clarification  11/2/03  8:10PM
195:The axiom of choice  11/3/03  1:11PM
196:Quantifier complexity in set theory  11/6/03  3:18AM
197:PL and primes 11/12/03  7:46AM
198:Strong Thematic Propositions 12/18/03 10:54AM
199:Radical Polynomial Behavior Theorems
200:Advances in Sentential Reflection 12/22/03 11:17PM
201:Algebraic Treatment of First Order Notions 1/11/04 11:26PM

Harvey Friedman

 












