[FOM] The Lucas-Penrose Thesis
robblin at thetip.org
Fri Sep 29 17:10:53 EDT 2006
On Sep 29, 2006, at 8:54 AM, Eray Ozkural wrote:
> Looking at this specific case of whether humans are better at
> solving the halting problem than machines, the answer is definitely
> negative. For those who are not intuitively satisfied with this way of
> looking at the Godel-Lucas-Penrose thesis, I invite them to review some
> of the harder halting problem instances popular in the literature. A few
> lines of code can grind our brains to a halt. This is because our
> brains can hold only so much information.
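To illustrate how short such instances can be (my example, not from the quoted post, and the function names are mine): the following few lines halt if and only if Goldbach's conjecture is false, so deciding whether the search halts means settling an open problem.

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_goldbach_sum(n):
    """True if the even number n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample():
    """Halts iff Goldbach's conjecture is false -- nobody knows whether it does."""
    n = 4
    while is_goldbach_sum(n):
        n += 2
    return n  # a counterexample, if this line is ever reached
```

No clever inspection of these lines is known to settle whether `search_for_counterexample()` terminates, which is the point of the quoted remark.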
The claim is NOT that any given human mind can derive the Godel
sentence for any given arithmetical system.
The claim is that it is logically impossible for a given machine to
determine the truth of (any of) its Godel sentences, and that it is
not logically impossible for a human to decide the Godel sentence for
that given machine.
Call the machine-specification in question M.
We know there are undecidable formulas for M. Call one of them G.
We know M does not prove G.
Call the human-mind-system H. If "we know that M does not prove G"
then H can know that M does not prove G. And H can decide G.
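The parallel with the halting problem can be made concrete by the usual diagonal construction (a sketch of my own, with hypothetical names): given any purported halting decider, one can build a program that the decider must misjudge, just as M cannot settle its own G.

```python
def make_diagonal(halts):
    """Given a claimed halting decider `halts`, return a program d
    that does the opposite of whatever `halts` predicts about d itself."""
    def d():
        if halts(d):
            while True:  # halts(d) said we halt, so loop forever
                pass
        # halts(d) said we never halt, so halt immediately
        return "halted"
    return d

# Any concrete decider is refuted by its own diagonal program:
pessimist = lambda prog: False  # claims no program ever halts
d = make_diagonal(pessimist)
print(d())  # prints "halted" -- so `pessimist` was wrong about d
```

Standing outside the decider, we can see what it gets wrong about its diagonal program; the decider itself, by construction, cannot.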
(This puts aside, for the moment, concerns about size and time
limitations. One assumes, as in common mathematical practice, that the
human works in "logical time" - i.e. without constraints such as dying
before solving the problem. The same caveat is applied throughout
'classical' mathematics. It doesn't make me particularly happy, but it
is the assumption.)
Whether this is because humans can make up new axioms ad hoc, I'm not
sure. However, if there were a machine that could make up new axioms
ad hoc, I'm not sure it would count as a Turing machine. I certainly
can't see any way of implementing it that wouldn't also make 0=1 a
theorem.
An inconsistent machine will DEFINITELY prove 0=1. A human, even if
they're inconsistent on some matters, may not prove 0=1. They could
avoid this, for instance, by stubbornly refusing to recognize proofs
that they ought to accept given the other things they believe.
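The point that an inconsistent formal system proves everything, 0=1 included, is just ex falso quodlibet. A minimal sketch in Lean 4, using its built-in `absurd`:

```lean
-- From any contradiction, anything follows -- including 0 = 1.
example (P : Prop) (h : P) (hn : ¬ P) : 0 = 1 :=
  absurd h hn
```

A machine that follows the rules must draw this conclusion; the stubborn human simply declines to.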
I think the conclusion I draw is somewhat weaker than "Humans are
not machines": I draw the conclusion that humans are not LOGICAL
Machines (in the sense of modern mathematico-logic).