[FOM] Complexity of (Universal) Turing Machines
Steve Gardner
Steven.Gardner at arts.monash.edu.au
Mon Apr 28 21:45:59 EDT 2008
Andrej Bauer wrote:
> If your motivation is a machine which can "generate" all _finite_
> strings then you should not be looking at universal Turing machines,
> which can also enumerate all computable infinite strings, and they
> can diverge (not give an answer), etc. How do you account for these
> possibilities in your idea of looking at a UTM as a probability
> distribution? Furthermore, the set of inputs for which a UTM stops and
> gives a finite output is not decidable.
>
> Perhaps you should be looking at machines that enumerate all finite
> strings. Of course there will be some pretty simple machines that do
> that, e.g., the one computing the identity function. Not so interesting,
> but perhaps still better.
>
>
Thanks for your reply, Andrej.
The interest in UTMs is as models of receivers of explanation messages
in the Minimum Message Length (MML) framework. An explanation message is
a two-part message, where the first part states a hypothesis H (optimally
encoded according to a prior probability distribution over the space of
possible hypotheses), and the second part of the message optimally
encodes the data under the assumption that H is true. If a UTM is taken
as the receiver of such a message, then the first part of the message,
stating H, is an interpreter which turns the UTM into a TM (possibly
also universal) which will be able to decode the optimally encoded
data in the second part (and, if H really is true, optimally encode any
future data). Prior to receiving the explanation message, we can regard
the UTM as making implicit assertions about the prior probabilities of
the possible hypotheses: for some theory H, the prior probability p(H)
implicitly asserted by the UTM is 2^-L, where L is the Kolmogorov
complexity of H relative to that UTM. After receiving the first part of
the explanation
message, the UTM will have different expectations.
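(To make the quantities concrete, here is a minimal Python sketch of how
the two-part message length relates to the implicitly asserted prior
2^-L. This is my own toy illustration, not anything from Wallace 2005;
the hypothesis and the numbers are invented for the example.)

    import math

    def implicit_prior(L):
        # A hypothesis whose interpreter is L bits long receives
        # implicit prior probability 2^-L from the UTM.
        return 2.0 ** -L

    def two_part_length(prior_H, likelihood_D_given_H):
        # First part: H, optimally encoded w.r.t. the prior.
        # Second part: data D, optimally encoded assuming H is true.
        return -math.log2(prior_H) - math.log2(likelihood_D_given_H)

    # Toy numbers: a 10-bit interpreter for H, and data with
    # probability 1/32 under H.
    p_H = implicit_prior(10)            # 2^-10
    print(two_part_length(p_H, 1/32))   # 10 + 5 = 15 bits
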
Now to respond to your point about divergence: if the first part of the
message turns the UTM into a TM which fails to halt when given the
encoded data as input, this is equivalent to asserting that the correct
theory is not a computable consequence of the data. I think scientists
assume in their practice that the correct theory will always be a
computable consequence of the data (if not from data we have now, then
at least from data we could in principle obtain), but even if that
assumption turned out to be wrong, I think a working scientist would
have to go on as if it were right. Theories with non-computable
consequences will not be accepted as explanations of the data. (Apparent
counter-examples to this precept, such as quantum mechanics, are
discussed in Wallace 2005, pp.128-129.)
Of course it's true that strictly speaking, the UTM is *too* powerful to
serve as a realistic model of the receiver of an explanation. A UTM has
an unbounded work tape, which serves as an unbounded memory capable of
storing any finite number of binary digits. In the real world, no human
or computer has access to such an unbounded memory, nor to the unbounded
time needed to use it. (Wallace 2005, p.127).
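(To illustrate that last point, again with a sketch of my own rather
than anything from Wallace: a realistic receiver can only approximate a
UTM by simulating the decoder under a hard budget of steps, treating any
decoder that exhausts the budget as if it had diverged.)

    def bounded_run(step, state, max_steps=10**6):
        # A resource-bounded receiver: run the decoder's transition
        # function until it halts or the step budget runs out. In
        # practice a non-halting decoder is indistinguishable from
        # one that is merely too slow.
        for _ in range(max_steps):
            halted, state = step(state)
            if halted:
                return state
        return None  # budget exceeded: treated as divergence

    # Toy decoder that halts after counting down to zero.
    countdown = lambda n: (n == 0, max(n - 1, 0))
    print(bounded_run(countdown, 5))      # halts: 0
    print(bounded_run(countdown, 10**9))  # None: exceeds the budget
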
Regards,
Steve
--
Steve Gardner
School of Philosophy and Bioethics
Faculty of Arts, Monash University
Steven.Gardner at arts.monash.edu.au
Office: W903A/11 ph: (613) 9902 0017
-- Anything that can't go wrong, will go wrong.