FOM: more juicy CA refs.. free online!!
vznuri at earthlink.net
Tue Aug 6 16:23:07 EDT 2002
after musing on it a bit, I realized my new enthusiasm for
the subject of CAs is related to several key aspects that
would appeal to just about any theoretical researcher:
(a) recently it really looks to me like they are a more "natural" model
of computation than turing machines, and perhaps any other model,
& much work in "turing machine theory" may eventually be translated
into them instead for a more natural & concise version of many properties.
(b) I agree with wolfram that they may be a primary means & vehicle for
unifying mathematics & algorithms, and possibly, even physics,
into an all-encompassing theory.
(c) they relate closely to some research in automated theorem proving
I did in the mid 90s which could be seen as rather obscure but takes
on new importance. I am optimistic that they may also eventually be
the basis of powerful new automated theorem proving systems.
overall it looks to me that even after about half a century of inquiry of
varying degrees of intensity, the surface has only been scratched on
the subject of CAs & their interpenetration with existing mathematical &
algorithmic theory. but the initial groundwork may be currently
converging into a "critical mass" crossfertilization with existing theory.
hey, lets face it-- even though one has to take
a very somber & serious tone in e.g. grant applications, Phd defenses, etc.
sometimes research can be really exciting!!
below are some other really key & compelling aspects & dimensions
of this new fusion that JB & I just turned up &
I missed covering in the prior msg. note that all these
are ***freely*** downloadable papers available immediately off
citeseer for everyones intellectual pleasure.
1st, let me quickly outline the problem with defining computational
universality in CAs with an example scenario that just occurred to me
since the prior msg. consider a CA with infinite input
and output whose evolution seems to "look" completely
random. the problem of defining computational
universality in CAs involves defining what constitutes
a computation in the CA. but what if a CA evolution that
"looks" random at every point in the process can actually be
utilized as a universal TM? I suspect
a result along these lines is quite possible.
(this also suggests a link of CAs to
kolmogorov-chaitin complexity!! which sure enough can be found
in the literature.)
so informally I would call this the "embedding problem" of CAs
which we have been focusing on here. how can
we determine if useful computation can be "embedded" in the rules &
evolution of the CA? its easily an undecidable question in general. but we
have cases such as the cook-wolfram proof that show that there are some
very complicated, nontrivial, unexpected, and surprising "embeddings".
keep in mind, according to wolfram, cook was about to
give up on rule 110 at one point because it looked hopeless & impossible
to "squeeze" universality out of it!! (by the way, while I have
questioned wolframs technical contribution to the proof, which as
far as I can tell was not significant or perhaps nonexistent,
I must fairly credit him with apparently financially supporting
cook, i.e. **funding** a freewheeling & supportive
academic and research environment behind the proof's creation.)
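for concreteness, here is a minimal python sketch of evolving rule 110 itself (the rule of the cook-wolfram proof). note the periodic boundary here is my own simplification for illustration; the actual universality construction works on an infinite row with a periodic background.

```python
def step_rule110(cells):
    """advance one row of a binary CA by one step under rule 110."""
    rule = 110  # the 8 output bits of the transition table, packed as an int
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[(i - 1) % n]    # periodic boundary (a simplification)
        center = cells[i]
        right = cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        nxt.append((rule >> idx) & 1)
    return nxt

# evolve a single 1 on a background of 0s and print the space-time diagram
row = [0] * 31
row[15] = 1
for _ in range(8):
    print(''.join('#' if b else '.' for b in row))
    row = step_rule110(row)
```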
"embedding" is also obviously related to wolframs "principle of
computational equivalence" outlined in NKS. arguably that is basically a
conjecture or thesis that "embedding" is far more common than
is realized, that many computational systems permit embedding. so one
could even look at it as an extension of the church-turing thesis.
church-turing tells us what computation is somehow "possible", the principle
of computational equivalence asserts it is somehow "common".
I suspect there will be much interesting research in the future
into "embedding" and many ways of showing how to embed TMs into CAs
& other computational systems.
now, on to the references.
The Game of Life: universality revisited (1998)
B. Durand, Zs. Roka
here we have a reference that starts to inquire into the
general concept/theme of computational "embedding" in CAs.
john baldwin just turned this reference up. outstanding!! its a pity
it took us so long to stumble across this one
given how extremely relevant it is to
our current inquiry. the authors look at conway's
Life universality proof (apparently even finding a bug in one of
his constructions) and also survey the universality results on CAs.
they note the fuzziness & variations
in the universality definitions across the literature (where most
authors are rather nonchalant or even oblivious about rigor), & how various
definitions can exclude certain constructions. they hammer out more rigorous
definitions but conclude that a fully definitive statement may not be
available. they look at periodicity in the inputs, input/output
encoding and decoding, etcetera. they also tie in the relationship
of boolean circuit computations within CAs. many nice & revealing
diagrams, very readable & intuitive. its such a relief to find out that
at least two other researchers were feeling the same cognitive
dissonance we were w.r.t. the existing CA computational universality literature.
Kolmogorov Complexity and Cellular Automata Classification (1999)
J.-C. Dubacq, B. Durand, E. Formenti
here we have a key reference linking up kolmogorov-chaitin complexity to
CAs. the authors come up with a parameter based on the CA transition
table that measures the "randomness" of evolution & compare it
with another parameter from the literature, langton's lambda value,
which, roughly, measures the density of non-quiescent states
in the state table.
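the lambda parameter as just described (the density of non-quiescent entries in the transition table) takes only a few lines of python to sketch; the dict-based table representation is my own illustrative choice:

```python
def langton_lambda(rule_table, quiescent=0):
    """fraction of transition-table entries mapping to a non-quiescent state."""
    entries = list(rule_table.values())
    return sum(1 for s in entries if s != quiescent) / len(entries)

# example: rule 110 written out as a transition table over 3-cell
# binary neighborhoods (left, center, right)
rule110 = {(l, c, r): (110 >> ((l << 2) | (c << 1) | r)) & 1
           for l in (0, 1) for c in (0, 1) for r in (0, 1)}
print(langton_lambda(rule110))  # 5 of the 8 entries are non-quiescent -> 0.625
```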
Compression Depth and the Behavior of Cellular Automata (1997)
James I. Lathrop
the author finds that lempel ziv compression analysis applied
to CA output can help in the wolfram classification scheme &
is also related to the langton lambda parameter. note that
compression algorithms like lempel ziv can be considered one
possible empirical estimate of kolmogorov-chaitin complexity.
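this kind of compression-based estimate is easy to try with python's zlib (which implements an LZ77 variant of lempel-ziv); the toy comparison below is my own illustration, not lathrop's "compression depth" measure itself:

```python
import random
import zlib

def compression_ratio(bits):
    """compressed size over raw size: low for regular data, higher for random."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

# a strictly periodic pattern vs. random-looking output of the same length
regular = [0, 1] * 500
noisy = [random.randint(0, 1) for _ in range(1000)]
print(compression_ratio(regular))  # small: the pattern is easily described
print(compression_ratio(noisy))    # larger: little structure to exploit
```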
Revisiting the Edge of Chaos: Evolving Cellular
Automata to Perform Computations (1993)
Melanie Mitchell, Peter T. Hraber, and James P. Crutchfield
this paper is also in wolframs journal.
this is an example of the highly empirical approach that
can be taken w.r.t. CAs. the authors reexamine langton's
lambda parameter. what really struck me as I looked this
over is this: phase transitions in NP complete problems &
computational complexity theory have grown to be an absolutely
massive and explosive research subfield in CS in the
decade of the 90s, with no end yet in sight.
but here we have langton talking about a phase-transition-like phenomenon
in CAs as early as 1990 via his lambda parameter. prescient!!
there is a remarkable story about langton in the book
"complexity" by waldrop.. it goes into rather intense detail,
even how his Phd proposal on "the edge of chaos" was initially rejected
by his advisors due to it being too nebulous!! langton
is a very colorful figure & an early participant at the
santa fe institute, and arguably one of the founders
of modern complexity theory, although from what I can tell he has
not stayed active in research, choosing to go into industry.
The 3x+1 Problem: a Quasi Cellular Automaton
Thomas Cloney, Eric Goles, Gerard Y Vichniac
wolframs complexity theory journal, vol 3, #2
alas, I dont know of a free download for this one, but
I cant resist citing this, because it comes so
close to my own research direction since mid90s,
even though some consider the collatz problem a bit of a
quibbling curiosity. its basically the only
early reference I can find so far that can be taken to relate CA behavior
to theorem proving, justifying my earlier claim above. let me sketch the reasoning:
the collatz conjecture itself can be seen as merely a toy problem
because it doesnt seem to have been linked to any deeper
or more significant areas of mathematics per se,
but the general goal of attacking an arbitrary conjecture
via some algorithmic system (e.g. CAs) relates deeply to
automated theorem proving. if one were able to prove the
collatz conjecture via CA analysis, it may not be so interesting
in what it reveals about the collatz conjecture, but more
that the same technique might be generalized to
attack any other arbitrary mathematical conjecture, including
very difficult ones.
so in a sense, the marginally-significant
collatz conjecture serves as the training wheels for the bicycle.
that is exactly the research plan/agenda and algorithmic framework that
I have been seeking to advance & develop for over half a decade.. with
no coconspirators in the project.. "yet". =) (but anyway its
invigorating for me to see some of the CA research & wolfram papers come
so close to it, & am optimistic it will naturally flow further
in that direction.)
also note that (coincidentally enough) conway looked
at systems similar to the 3x+1 problem
and was able to prove they lead to undecidable problems in general,
which could be taken as circumstantial evidence that even a problem
as simply constructed as 3x+1 is undecidable.
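for reference, the 3x+1 map itself takes only a few lines; a minimal python sketch that checks the conjecture over a small range (which of course proves nothing about the general case):

```python
def collatz_steps(n):
    """iterate the 3x+1 map from n until reaching 1; return the step count."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# the conjecture asserts every positive integer eventually reaches 1.
# every n below happens to terminate, so this loop finishes -- but that
# is exactly what is unproven in general.
longest = max(range(1, 10000), key=collatz_steps)
print(longest, collatz_steps(longest))
```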
my thanks to JB for his own enthusiasm in these subjects & our email
exchanges without which I may not be making many of these inquiries!!