[FOM] 332: Preferred Model
Vaughan Pratt
pratt at cs.stanford.edu
Fri Jan 23 01:33:36 EST 2009
Harvey Friedman wrote:
> i. The elements of NBS in P are all in P#.
> ii. Suppose (x,y) is in P and z is in P#. Let w be the result of
> replacing exactly one occurrence of x as a substring in z, by y. Then w
> is in P#.
Various flavors of rewriting systems of this degree of simplicity
constitute the bulk of the machines studied in the 1200 pages of
Wolfram's A New Kind of Science (ANKS), not counting the 60-page index
with close to 10,000 entries. While some of them are as sophisticated as continuous
differential equations, the one Wolfram starts with in Chapter 2 amounts
to the following adaptation of FPM = Friedman's Preferred Model.
(i) (restriction on NBS in P, in the FPM terminology) P contains exactly
one element of NBS, meaning a single string (as opposed to a pair of
strings, the other half of P).
(ii) (restriction on NBS^2 in P) For all pairs (x,y) in P, |x| = |y| =
3, with the further constraint that y can only differ from x in the
middle bit. In other words (x,y) has the form abc --> adc where a,b,c,d
are bits. Furthermore this part of P (the pairs of strings, aka rewrite
rules) consists of exactly eight pairs, one for each 3-bit pattern abc.
(iii) (variant) w is the result of applying *all* applicable rules
synchronously (as opposed to just one as in your machine), namely n-2
such applications when the initial string is of length n. (Cellular
automata in place of Post production systems.)
Since there are 8 possible values for abc (call them 000 through 111),
for each of which d may be 0 or 1, there are 256 possible sets of pairs
(x,y) (call them 00000000 through 11111111 - Wolfram calls each such
8-bit string a rule). The rule 00100011 says that d is 1 for patterns
000, 001, and 101, and 0 for the other 5; Wolfram calls this rule 35
since 00100011 in binary is 35 in decimal.
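
A minimal Python sketch of this encoding and of one synchronous step
in the sense of (iii); the names pairs_of and step are mine, and the
sketch takes the reading of (iii) on which the two end bits, being the
middle of no window, are left unchanged:

    def pairs_of(rule):
        """The eight rewrite pairs abc --> adc encoded by an 8-bit rule
        number: d for pattern abc is bit (4a+2b+c) of the rule number."""
        return {format(v, '03b'): (rule >> v) & 1 for v in range(8)}

    def step(s, rule):
        """One synchronous pass over a finite bit string s: each of the
        n-2 length-3 windows abc has its middle bit rewritten to d; the
        two end bits, the middle of no window, are left unchanged."""
        if len(s) < 3:
            return s
        d = pairs_of(rule)
        mid = ''.join(str(d[s[i:i+3]]) for i in range(len(s) - 2))
        return s[0] + mid + s[-1]

    # The rule 35 example: the patterns whose middle bit becomes 1.
    assert [abc for abc, d in pairs_of(35).items() if d == 1] == ['000', '001', '101']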
Wolfram exhibits the behavior of all 256 P's when started on a single
string of bits with only the middle bit 1, on pp. 55-56 of Chapter 3.
In Wolfram's analysis there are 88 "fundamentally inequivalent" such
rules. About two-thirds produce periodic behavior. The rest produce
growing patterns, mostly well-structured, but ten produce ostensibly
random patterns, of which rules 30, 45, and 73 are representative. (The
above contains everything you need to verify this for yourself, which
you can do with a few lines of code.)
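
For instance, a rough Python version of those few lines (the name show
and the display characters are arbitrary choices, not anything from
ANKS):

    def show(rule, steps=16):
        """Print the first `steps` rows of the elementary CA numbered
        `rule`, started from a single 1 on a two-way infinite background
        of 0s ('#' = 1, '.' = 0)."""
        p = [(rule >> v) & 1 for v in range(8)]   # p[4a+2b+c] = new middle bit
        row = [0] * (steps + 1) + [1] + [0] * (steps + 1)
        for _ in range(steps):
            print(''.join('.#'[b] for b in row))
            bg = row[0]                   # current background value
            ext = [bg] + row + [bg]       # the row extends to infinity with bg
            row = [p[4*ext[i-1] + 2*ext[i] + ext[i+1]]
                   for i in range(1, len(ext) - 1)]

    show(30)    # ostensibly random; compare show(90), which is nested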
If the starting string of P is of length n, P# can have at most 2^n
strings. However if it is of the form ...0000100000..., i.e. infinite
in both directions with exactly one nonzero bit, some very interesting
P#'s can emerge. Every string in every P# will be ultimately constant
to the left and the right (the same constant on both sides). This
suggests cutting off the constant parts in order to make each P# a set
of finite strings. All 256 such P#'s will be not only recursive but in
DTIME(n) (linear deterministic time), since we need only run the
relevant rule for n/2 steps to decide membership in P# of a string of
length n.
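
The truncation itself might be sketched as follows (trim is my name;
it assumes the row is held in a window wide enough that its end cells
are still pure background, as in show above):

    def trim(row):
        """Cut off the eventually constant parts of a row: strip the
        background value found at its ends, leaving the finite core
        (possibly empty, when the whole row is background)."""
        bg, lo, hi = row[0], 0, len(row)
        while lo < hi and row[lo] == bg:
            lo += 1
        while hi > lo and row[hi - 1] == bg:
            hi -= 1
        return row[lo:hi]

Deciding membership of a length-n string is then the comparison with
the trimmed rows over the first n/2 or so steps, as just described.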
But what if we pay no attention to constancy when truncating, but simply
truncate arbitrarily? That is, constitute P# as the set of
finite-length substrings of the infinite strings produced by the rule
starting from the infinite string ...00000100000... The resulting P#
will be of low computational complexity for all but the ten chaotic
rules (and conceivably some of the others, whose apparent periodicity
could be masking something deeper, however unlikely this may seem at
first glance).
QUESTION: What is the computability of the ten ostensibly chaotic P#'s
when defined by arbitrary truncation as above?
All P#'s so defined are easily seen to be r.e.: just run the rule until
the given string turns up as a substring of some row.
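
A sketch of that semi-decision procedure (occurs is my name; the step
bound is only there so a casual call terminates when it fails to find
w, where the true procedure would simply keep running):

    def occurs(rule, w, max_steps=1000):
        """Look for the bit string w as a substring of some row of the
        evolution of `rule` from ...000100000...; return the first step
        at which it occurs, or None if not seen within max_steps."""
        p = [(rule >> v) & 1 for v in range(8)]
        row, bg, n = [1], 0, len(w)
        for t in range(max_steps):
            # Any length-n substring of the infinite row either lies
            # wholly in the background or overlaps the window `row`, so
            # this padded window catches every occurrence at step t.
            line = ''.join(map(str, [bg] * n + row + [bg] * n))
            if w in line:
                return t
            ext = [bg, bg] + row + [bg, bg]
            row = [p[4*ext[i-1] + 2*ext[i] + ext[i+1]]
                   for i in range(1, len(ext) - 1)]
            bg = p[7 * bg]
        return None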
If any of them are complete (in the sense of r.e.-complete) then we have
an 8-bit presentation of an r.e. set that is not only not recursive but
as bad as it gets for r.e. sets.
Does anyone know whether this question has been asked before? It would
be surprising if not, since it is so natural and moreover intimately
bound up with the Wolfram 2,3 Turing Machine Research Prize; yet I've
never heard of it.
I conjecture that FPM (the model Harvey prefers) has no complete P# of
"natural" size as low as 8.
HF> The size of a presentation is the sum of all of the lengths of the
HF> bit strings that appear. P# is the r.e. set associated with P.
This is not a natural measure as it does not charge for the delimiters.
Either use self-terminating bit strings or equivalently charge the
"size" of an n-bit bit string as n + log n + log log n + ... (all logs
base 2 and rounded up to an integer). But even without the log n etc. I
still seriously doubt there could be a size-8 complete P# under FPM.
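
On one reading of that formula (iterate the rounded-up log until it
reaches zero), the charge for an 8-bit string comes to 8 + 3 + 2 + 1 =
14; in throwaway Python:

    import math

    def sd_size(n):
        """n + ceil(log2 n) + ceil(log2 ceil(log2 n)) + ..., summed until
        the iterated, rounded-up logarithm reaches 0 (one reading of the
        charge for a self-delimiting n-bit string)."""
        total, k = 0, n
        while k > 0:
            total += k
            k = math.ceil(math.log2(k)) if k > 1 else 0
        return total

    assert sd_size(8) == 8 + 3 + 2 + 1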
HF> 1. What (approximately) is the least "presentation size" of an r.e.
HF> set that is not recursive?
HF> 2. What (approximately) is the least "presentation size" of an r.e.
HF> set that is of intermediate Turing degree?
The difference between 1 and 2 would have been clear if 1 had asked
about complete sets. As it stands I guess what Harvey means by
"intermediate" is < complete, as distinct from 1 which I take to mean
<= complete. I tend to believe Wolfram's Principle of Computational
Equivalence, PCE, which amounts to the conjecture that almost all
non-recursive sets arising from these simple systems are complete.
(The "almost" tends to get dropped in press accounts of PCE; without
it, and without a bound on "simple", the claim is contradicted by
Friedberg-Muchnik.)
The odds that the answer to 1 for FPM is not a complete set seem
vanishingly small to me. But I can't fault Harvey for playing it safe
there, since Wolfram's PCE is not a theorem.
Vaughan Pratt