FOM: Re: Chaitin

Raatikainen Panu A K Praatikainen at elo.helsinki.fi
Sat Mar 24 11:44:36 EST 2001


Joe Shipman wrote:

> In defense of Chaitin, I have always found his approach by far the
> easiest way of establishing incompleteness theorems, and his
> philosophical insight that the strength of theories is ultimately
> dependent on their algorithmic information content is important.

I strongly disagree. The easiest way is to use the notion of
truth and to show, by a diagonal argument, that the provable
does not exhaust the true (see, e.g., Smullyan's book on
incompleteness). Indeed, a rigorous proof of Chaitin's theorem
requires one to arithmetize both Turing machines and the syntax
of the theory in question (Gödel's proof requires only the
latter) and to move back and forth between these two codings
(no wonder so many people have got lost).
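
Spelled out, a minimal sketch of that truth-based argument runs
as follows (notation mine). Let T be a sound, recursively
axiomatizable theory of arithmetic, and let Prov_T(x)
arithmetize provability in T. By the diagonal lemma there is a
sentence G such that

    $\mathbb{N} \models G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner)$

If T proved G, then Prov_T('G') would be true and hence G
false, contradicting soundness; so T does not prove G. But then
~Prov_T('G') is true, hence so is G: G is true but unprovable,
and the provable does not exhaust the true.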

Further, the claim that the strength of theories is ultimately
dependent on their algorithmic information content is simply
false, as Shipman's own comments later show, when he admits
that "some very strong theories seem to have much simpler
axiomatizations than much weaker ones".

****

Charlie Silver wrote that Rudy Rucker, for one, considers
Chaitin's Theorem to be of greater philosophical interest than
Gödel's Theorem. Rucker, he says, holds that Chaitin's Theorem
gives us more information than Gödel's.

In fact, Chaitin's Theorem gives less information, in the sense
that it holds only for theories that are not merely consistent
(or 1-consistent), as in Gödel's proof, but also sound for
sentences of the form "K(n) > m".

******

Joe Shipman also wrote:

"We need a better way to measure (an upper bound on) the
algorithmic complexity of a theory.
         The most straightforward way to do this is to start with the 
predicate calculus as a computational base, and define the 
complexity of a theory to be the length of the shortest 
axiomatization, converting non-finitely-axiomatizable theories into 
finitely axiomatizable conservative extensions by introducing 
new predicates to Skolemize axiom schemes.  (Consider GB 
versus ZF.)  But there may well be ways to represent the size of a 
theory involving other computational bases that are superior for 
relating "size" of a theory to logical strength.
PA and ZFC both seem to have the property that they are "simpler" 
(in some inexact sense) than any logically stronger consistent 
theories. I'd like to see someone either dispute this assertion or 
suggest other examples of the phenomenon."

RE: I agree, except that one should not call it algorithmic 
complexity any more - it is a different notion. I have had some 
ideas in this direction, and Harvey Friedman has had some too 
(I think it was a year ago or so when we discussed them here on 
FOM). One way of making the measure precise is sketched below.
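
One tentative way to make that measure precise (my notation,
not Shipman's): over a fixed finite alphabet for the predicate
calculus, put

    $c(T) = \min \{\, |A| : A \text{ finite and conservative over } T \,\}$

where A may use an expanded signature and |A| is the total
symbol length of A. This is well defined for recursively
axiomatizable T, since every such theory has a finitely
axiomatized conservative extension obtained by introducing new
predicates, as in Shipman's Skolemization of axiom schemes; GB
versus ZF is exactly an instance, GB being finitely axiomatized
and conservative over ZF for sentences about sets. Note that
c(T) is a syntactic length measure tied to the predicate
calculus, not a machine-based invariant, which is one reason
not to call it algorithmic complexity.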

- Panu Raatikainen



