[FOM] The Lucas-Penrose Thesis
Bob Hadley
hadley at cs.sfu.ca
Sat Sep 30 13:45:40 EDT 2006
Readers who have been following the discussion of Lucas and
Penrose may well find a recent paper of mine to be useful.
Fragments of the abstract and introduction are copied below,
and the paper is entitled:
"Consistency, Turing Computability, and Godel's First
Incompleteness Theorem."
The paper is currently under review at a journal, but I'd be
happy to supply copies upon individual request. Just send a
request to me at hadley at sfu.ca
Cheerio,
Bob Hadley
Simon Fraser University
~~~~~~~~~~~~~~~ Excerpts follow ~~~~~~~~~~~~~
It is well understood and appreciated that Godel's
Incompleteness Theorems apply to sufficiently strong, formal
deductive systems. In particular, the theorems apply to systems
which are adequate for conventional number theory. Less well
known is that there exist algorithms which can be applied to such
a system to generate a Godel sentence for that system.
Although the generation of a sentence is not equivalent to
proving its truth, the present paper argues that the existence of
these algorithms, when conjoined with Godel's results and
accepted theorems of recursion theory, does provide the basis
for an apparent paradox. The difficulty arises when such
an algorithm is embedded within a computer program of sufficient
arithmetic power. The required computer program (an AI system)
is described herein, and the paradox is derived. A solution to
the paradox is proposed, which, it is argued, illuminates the
truth status of axioms in formal models of programs and Turing
machines.
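(Aside for FOM readers, not from the paper: the sentence-generating
algorithm mentioned above rests on the familiar diagonal construction.
The toy Python sketch below shows the idea at the level of strings;
the names goedel_number, diag, and Prov_S, and the encoding, are
illustrative stand-ins only, not the construction used in the paper.)

    def goedel_number(s):
        # Toy encoding: read the UTF-8 bytes of the string as one integer.
        return int.from_bytes(s.encode("utf-8"), "big")

    def diag(template):
        # The diagonal operation: substitute the template's own Goedel
        # number for its free variable x.
        return template.replace("x", str(goedel_number(template)))

    # A one-variable formula asserting "the diagonalization of the
    # formula with Goedel number x is not provable in the system S".
    template = "NOT Prov_S(diag(x))"

    # Its diagonalization is a Goedel-style sentence G for S:
    G = diag(template)
    print(G)
    # G reads "NOT Prov_S(diag(n))", where n codes the template above;
    # since diag(n) is G itself, G asserts its own unprovability in S.

Producing G mechanically in this way is, of course, not the same as
proving that G is true (given the consistency of S); that further step
is where the paper's argument begins.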
Also well known is that Lucas (1961), Penrose (1989, 1994), and
others have presented arguments, grounded upon Godel's first
incompleteness theorem, to show that human cognition cannot be
completely modeled by any computer program or Turing machine.
The present paper seeks to show that particular assumptions and
strategies employed by Lucas and Penrose lend themselves to the
production of an apparently serious paradox. In effect,
certain of their assumptions and modes of reasoning can be
adapted to "prove" that not only are humans not machines, but
machines are not machines! Of course, such reasoning must
contain some flaw. In what follows I develop the paradox and
offer a solution to it. (I will be using 'paradox' in
its dictionary sense, which does not imply unsolvability.)
The development of the solution provides insight into the nature
of axioms within formal models of Turing machines and computer
programs. In particular, the truth status of such axioms is shown
to be problematic. In addition, other subtleties relevant to
computability and recursion theory are explored in the process.