[FOM] The Lucas-Penrose Thesis
examachine at gmail.com
Fri Sep 29 11:54:19 EDT 2006
On 9/29/06, Keith Brian Johnson <joyfuloctopus at yahoo.com> wrote:
> What guarantee do we have that a given human mind will recognize the
> Godel sentence of a given machine as true? Mightn't there be some
> machines whose Godel sentences a particular human mind wouldn't
> recognize as true?
Yes, there might. However, to see further why arguments from incompleteness
cannot establish that the human mind is super-mechanical, I propose
that we look into Chaitin's stronger versions of incompleteness, which
tell us that no formal axiomatic system (with a finite set of axioms)
can determine more than finitely many bits of Omega. In other words,
finite machines cannot ascertain every halting predicate, which we
already know from Turing's result.
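To make the asymmetry concrete, here is a minimal sketch (my illustration, not from the post) of why halting is only semi-decidable: by running a program we can confirm that it halts, but exhausting a step budget tells us nothing about whether it runs forever. Programs are modeled as Python generators, and the names are my own.

```python
def halts_within(program, steps):
    """Run `program` (a zero-argument generator function) for at most
    `steps` steps. Returns True if it halts within the budget, and None
    if the budget runs out -- which says nothing about eventual halting."""
    it = program()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True  # observed halting: a definite positive answer
    return None  # inconclusive: it may halt later, or never

def halting_program():
    # Halts after ten steps.
    for i in range(10):
        yield i

def looping_program():
    # Never halts.
    while True:
        yield

print(halts_within(halting_program, 100))  # True
print(halts_within(looping_program, 100))  # None
```

No choice of budget turns the `None` case into a definite "never halts"; that is exactly the gap Turing's result says no finite machine can close in general.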
In practice, we know for a fact that humans cannot easily determine
whether a given program halts. Some such instances have turned into
open mathematical challenges, and it may well be that there is no
guarantee that any given individual will settle them (and, more
strongly, no guarantee that _any_ individual will).
One point needs to be made. Humans can learn, but they learn in no way
different from how machines do. Thus, they can assume more (and extend
their axiom sets), but so can machines.
Looking at the specific question of whether humans are better at
solving the halting problem than machines, the answer is definitely
negative. For those who remain intuitively unsatisfied with this way of
looking at the Godel-Lucas-Penrose thesis, I invite them to review some
of the harder halting-problem instances popular in the literature. A few
lines of code can grind our brains to a halt, because our brains can
hold only so much information.
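A concrete instance of such a few-line program (my example, not one named in the post) is the Collatz iteration: whether the loop below terminates for every positive integer n is a famous open problem, despite the code's brevity.

```python
def collatz_steps(n):
    """Iterate n -> n/2 (if even) or 3n+1 (if odd) until reaching 1.
    No proof is known that this loop halts for all positive integers n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- a notoriously long trajectory
```

Neither any machine nor any human mathematician has so far settled whether this halts for all inputs, which is the sense in which a few lines can grind our brains to a halt.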
Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
http://www.cs.bilkent.edu.tr/~erayo Malfunct: http://myspace.com/malfunct