[FOM] Complexity of (Universal) Turing Machines
hzenilc at cmu.edu
Wed Apr 23 02:22:13 EDT 2008
On Mon, Apr 21, 2008 at 8:30 PM, Steve Gardner <Steven.Gardner at arts.monash.edu.au> wrote:
> for the purposes of Bayesian inference, choice of a UTM can be
> regarded as equivalent to choice of a prior probability function.
You, and perhaps other FOM readers, might be interested in reading the first version of a paper that we recently submitted:
"Towards a stable definition of Kolmogorov-Chaitin complexity".
Although information content is invariant up to an additive constant, the range of possible additive constants across programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov-Chaitin complexity of a string s. Some attempts have been made to arrive at a framework stable enough for a concrete definition of K, independent of the choice of programming language, by appealing to the "naturalness" of the language in question. The aim of this paper is to present an approach that overcomes the problem by looking at a set of models of computation converging in output probability distribution, such that "naturalness" can be inferred, thereby providing a framework for a stable definition of K over the set of convergent models of computation.
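The idea of reading complexity off an output probability distribution can be illustrated with a toy sketch (not the paper's actual experimental setup; the choice of elementary cellular automata as the model of computation, and all parameters below, are my own assumptions). Each of the 256 ECA rules is treated as a "program", all are run from the same initial condition, and the frequency of each output is converted to a coding-theorem-style complexity estimate K(s) ≈ -log2 P(s), so that frequently produced outputs receive low complexity:

```python
from collections import Counter
from math import log2

def eca_step(row, rule):
    """One step of an elementary cellular automaton (wrap-around edges)."""
    n = len(row)
    return tuple(
        # neighborhood value = left*4 + center*2 + right; the rule number's
        # bit at that position gives the new cell state
        (rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
        for i in range(n)
    )

def output_distribution(width=9, steps=4):
    """Run all 256 ECA rules from a single-1 initial row; tally the final
    row as the 'output' of each rule, giving an empirical distribution."""
    init = tuple(1 if i == width // 2 else 0 for i in range(width))
    counts = Counter()
    for rule in range(256):
        row = init
        for _ in range(steps):
            row = eca_step(row, rule)
        counts[row] += 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

dist = output_distribution()
# Coding-theorem-style estimate: frequent outputs get low complexity.
khat = {s: -log2(p) for s, p in dist.items()}
```

Repeating this for a second model of computation (say, small Turing machines) and comparing the two distributions is the kind of convergence test the abstract describes: if the rankings agree, the "naturalness" of the models can be inferred.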