[FOM] 2,3 Turing machine proof controversy
cg at xnet.co.nz
Thu Nov 8 00:44:09 EST 2007
Let me start with a quick summary of my understanding of what has
transpired recently with respect to the discussion of Alex Smith's proof
that the machine at the centre of the Wolfram 2,3 Turing Machine Research
Prize is in fact universal.
Alex Smith has claimed that his encoding scheme and his method of constructing
the infinite initial tape configuration in his proof do not gift
universality to the machine. Vaughan Pratt has argued otherwise, suggesting
that Alex Smith's proof essentially allows the equivalent of unbounded
invocations of a Linear Bounded Automaton (LBA) with different and
increasingly long input each time, and that this could be providing the
"universality" claimed for the 2,3 machine itself. Alex Smith replied,
saying that he believed the objection contained contradictory assumptions
and was therefore invalid. Today he has commented further in an attempt to
add some clarity.
Links to these earlier postings:
1. Vaughan Pratt's email:
2. Alex Smith's first response:
3. Alex Smith's more recent comments:
So now I'd like to suggest a simple thought experiment that might help shed
some light on this topic. My motivation is to try to understand how
Alex Smith's extension to the (extensively studied and well-understood)
"classical" single-tape Turing Machine can be justified (or not), especially
with respect to how practicable the underlying machine model is, and how
universality might be coming from something other than the 2,3 machine
itself.
The original Turing Machine was intended to be an abstract model of a
practicable computing machine - more or less something that we could
actually build and use if desired. It is probably more commonplace for the
tape of this machine to be simply described as infinite. However we can
equally think of it as always finite, so long as it is long enough to hold
all the non-blank data initially, and with the proviso that this machine
also has an endless supply of finite-sized lengths of blank tape and a
mechanism to use those to extend the main tape whenever required. Thinking
of the machine in this way doesn't change what it can compute, but it does
make it a little easier to justify the model as "practicable". Of course
there is still a need for a potentially endless supply of raw material to
create blank tape but it seems we can't do much about that!
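The finite-tape picture above can be made concrete in code. The following is a minimal sketch of my own (the names, the block size, and the toy rule table are all illustrative, not from anyone's proof): a classical single-tape machine whose tape stays finite and is spliced with a fresh block of blank cells only when the head runs off either end.

```python
# Minimal sketch: a classical Turing machine with a finite tape that is
# extended with fixed-size blocks of blank cells only on demand.
# All names here (BLANK, BLOCK, run) are illustrative.

BLANK = "_"
BLOCK = 8  # size of each "length of blank tape" spliced on when needed


def run(rules, tape, state="start", halt_state="halt", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            return "".join(tape).strip(BLANK)
        # Extend the finite tape with a fresh block of blanks on demand.
        if head < 0:
            tape = [BLANK] * BLOCK + tape
            head += BLOCK
        elif head >= len(tape):
            tape = tape + [BLANK] * BLOCK
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit reached")


# Toy rule table: walk right over the 1s, append a single 1, halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", BLANK): ("halt", "1", "R"),
}
print(run(rules, "111"))  # -> 1111
```

The point is simply that extending with blank blocks on demand computes exactly what the usual "infinite tape" formulation computes.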
Let us now imagine augmenting Turing's original model by explicitly
including a tape extension unit (TEU) of some kind or another,
which we can consider as a separate component within the larger system. The
idea is that the new model can still be easily understood as "practicable"
in the same sense as the original model but also supports a more direct
mental picture of what Alex Smith's proof requires. In fact, we might as
well allow for two such TEUs, the first for extensions on the left end and
the other for extensions on the right end of the main tape. In this new
system, there is still a mechanism for extending the tape to the left or
right when required, but instead of using any old piece of blank tape, it
always uses the next (and also always finite) length of tape generated by
either the left or right TEU as appropriate, and attaches that instead (and
in the correct orientation).
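To make the augmented model concrete before posing the questions below, here is a sketch of my own (all names are illustrative): the machine no longer splices on "any old piece of blank tape", but instead asks the left or right TEU, modelled here as a callable, for the next finite segment. A TEU that always returns blanks recovers the classical machine as a special case.

```python
# Sketch of the augmented model: tape extension is delegated to two
# tape-extension units (TEUs), modelled as callables that return the next
# finite segment for their end of the tape. All names are illustrative.

BLANK = "_"


def make_blank_teu(block=8):
    """A trivial TEU: every invocation yields a fresh block of blanks,
    recovering the classical machine as a special case."""
    def teu():
        return [BLANK] * block
    return teu


def run(rules, tape, left_teu, right_teu, state="start",
        halt_state="halt", max_steps=10_000):
    tape, head = list(tape), 0
    for _ in range(max_steps):
        if state == halt_state:
            return "".join(tape)
        if head < 0:                      # splice on the next left segment
            seg = left_teu()
            tape, head = seg + tape, head + len(seg)
        elif head >= len(tape):           # splice on the next right segment
            tape = tape + right_teu()
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit reached")


# Toy rule table as before: walk right over the 1s, append a 1, halt.
rules = {("start", "1"): ("start", "1", "R"),
         ("start", BLANK): ("halt", "1", "R")}
print(run(rules, "111", make_blank_teu(), make_blank_teu()).strip(BLANK))
```

The interesting questions are then entirely about what is allowed inside the TEU callables.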
1. What kind of machines can those TEUs be, what can they use for input, how
can they be "programmed", etc., without being in danger of gifting
"universality" to the system as a whole?
2. What is the weakest type of "general purpose" TEU that suffices for Alex
Smith's encoding scheme?
3. Is this a useful way of looking at the problem, and have similar models
to this been studied in detail previously?
We might imagine that each TEU is some kind of LBA and the input provided to
it on each invocation (each time a new piece of tape is required) is a
"carbon copy" of the initial (finite) input tape for the main system plus
copies of any previous tape extensions that this TEU may have generated (all
still in their "virgin state" as it were) and joined in the natural way,
thus giving it a finite section of the infinite initial tape as described in
Alex Smith's proof. It seems likely to me that a machine with this kind of
TEU, or a relatively simple variation of it, would be sufficient to
support the tape construction used in Alex Smith's proof, or a minor
variant of it.
However, if a truly arbitrary LBA is allowed (i.e. it is running a "program"
of our own choice) then the whole system can provide universal behaviour (as
I understood Vaughan Pratt to be pointing out earlier), as long as the logic
in the main "CPU" is able to recognise that the information on the tape
coming from the LBA means it is time for the overall system to halt!
Essentially the bulk of the computation can be done in the LBA itself. We
can arrange things so that sooner or later it will have a long enough tape
to halt (if the problem is computable), and some simple signal can be used
to notify the "main CPU" of that fact (via the tape).
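To see how stark this can be made, here is a deliberately extreme sketch of my own (every name is illustrative, and the "job" iterator stands in for an arbitrary computation): the main CPU does nothing but walk right looking for a halt marker, while the TEU does all the real work and eventually writes the marker onto a segment.

```python
# Sketch of the objection: a trivial "main CPU" that only walks right
# looking for a halt marker, while a powerful TEU does the real work and
# eventually emits the marker. All names are illustrative.

HALT_MARK = "!"


def make_working_teu(job, budget_per_call=1):
    """Stand-in for an LBA doing the bulk of the computation: each call
    performs a bounded slice of `job` (an iterator), and emits the halt
    marker once the job is exhausted."""
    done = [False]

    def teu():
        for _ in range(budget_per_call):
            if next(job, None) is None:
                done[0] = True
        return [HALT_MARK] if done[0] else ["_"]
    return teu


def main_cpu(tape, right_teu, max_steps=10_000):
    tape, head = list(tape), 0
    for steps in range(max_steps):
        if head >= len(tape):
            tape += right_teu()
        if tape[head] == HALT_MARK:
            return steps               # the overall system halts
        head += 1
    raise RuntimeError("step limit reached")


job = iter(range(5))                   # the "real" computation
print(main_cpu("_", make_working_teu(job)))  # -> 6
```

Here the main CPU contributes essentially nothing; the halting behaviour of the whole system is determined entirely by the TEU, which is exactly the kind of gifted "universality" at issue.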
Alex Smith's proof describes the construction of the infinite initial
configuration as being done in a "non-universal" way. He certainly hasn't
used the equivalent of a completely arbitrary LBA, but instead tries to make
his equivalent of our TEU machine as simple as possible. However, it still
needs to be an LBA or something very like it as far as I can see. As he
himself notes, the possible inputs for each construction phase (each
invocation of our TEU) include integers that can grow arbitrarily large,
and we then have to do various reasonably complex arithmetic on these values
and, based on the results, construct the required pattern (also
eventually growing arbitrarily large) on the tape.
A more reasonable restriction might be to require that the LBA used in the
TEU(s) must use the same finite state machine as that used by the "main
CPU". Now, everything that happens in the overall system is being processed
by several instances of the same type of "CPU", operating in a kind of
pipeline with data flowing only towards the main CPU from the TEU(s). But
really this is not doing much beyond splitting the workload.
In any case, this is not what Alex Smith's proof does. Instead, all the
computations of the TEU(s) in the model described by our "thought
experiment" have been delegated directly to the person encoding the input.
In effect, this person has to be the surrogate TEU in our augmented Turing
machine. As such, he or she has to be available essentially throughout the
computation ("forever" if it never halts), or alternatively, assuming a
truly infinite tape that must be completely initialised beforehand, then
that person (or encoder) has to do an infinite number of computations before
the machine can even start. On the other hand, with classical Turing machine
models the required computation to set up the initial tape is always finite
(by definition), and after that no more interaction is required until the
machine halts (assuming it does).
I would argue that this strongly suggests that the encoding and construction
in Alex Smith's proof is playing a role in the computation, and it is not
all being done by the 2,3 machine that is at the centre of this discussion.
We might also note that if we try to collapse this kind of machine (with
separate TEU(s)) back to the "classic" Turing Machine model, then we may not
have enough states and symbols available to do the required tape
initialisation. In other words, by shifting some of the work that a
classical Turing machine might otherwise need to do to the TEU, we've
simplified the task for the "main CPU". Furthermore, if you don't include an
explicit TEU in your system, then the saving in states and symbols required
by the main CPU is also effectively hidden (if you're interested in counting
them).