[FOM] The Lucas-Penrose Fallacy

hendrik at topoi.pooq.com
Thu Oct 12 13:44:41 EDT 2006


On Thu, Oct 12, 2006 at 02:32:11PM +0200, laureano luna wrote:
> 
> Any sound Turing machine is logically incapable of
> solving the halting problem when fed its own code.

If it only has to solve the halting problem when fed its
own code, there is an easy solution.  Let the TM ignore
its input and just say "yes".  This TM terminates on
any input, so when fed its own code it correctly asserts
that it terminates.
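
A minimal sketch of that trivial machine, in Python (my
illustration; the name says_yes is hypothetical):

    import inspect

    def says_yes(input_string: str) -> str:
        # The input -- even this machine's own source -- is
        # ignored; the machine halts at once and answers "yes".
        return "yes"

    # Feed the machine its own source code.  It halts and
    # answers "yes", which is a correct answer to the question
    # "do I halt on this input?".
    own_code = inspect.getsource(says_yes)
    print(says_yes(own_code))  # prints: yes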

What's impossible is for a Turing machine to solve the
halting problem for *all* Turing machines.
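
The standard diagonal argument can be sketched in Python,
under the assumption (made only to be refuted) that a
total, correct decider halts(prog, data) exists:

    def halts(prog: str, data: str) -> bool:
        # Assumed universal halting decider.  The argument
        # below shows no correct, always-terminating
        # implementation of this function can exist.
        raise NotImplementedError

    def diagonal(prog: str) -> None:
        if halts(prog, prog):
            while True:   # loop forever if halts() says we halt...
                pass
        # ...and halt at once if halts() says we loop.

    # Applied to its own source code, diagonal() halts if and
    # only if halts() claims it does not -- a contradiction.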

> But how could we establish for a sound human the
> (corresponding) logical impossibility of solving the
> halting problem for some particular machine when faced
> with it, without assuming beforehand that he is a
> machine?

Without knowing what sound humans do, it is pretty well
impossible to prove anything about what they can do.
And the logical impossibility of something is a property
of the logic.  A problem may be unsolvable in fact
without its unsolvability being a logical impossibility.

> Any consistent formal system of sufficient strength is
> logically unable to prove its own consistency. But how
> could we establish for a consistent human the
> (corresponding) logical impossibility of knowing the
> consistency of some particular consistent system when
> he has it in front of him, again without begging the
> question at issue?
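
(For reference, the theorem laureano is invoking is Gödel's
second incompleteness theorem: for any consistent,
recursively axiomatizable theory T extending PA,

    \( T \nvdash \mathrm{Con}(T) \),

i.e. T does not prove the arithmetized statement of its own
consistency.)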
> 
> The problem is that for all we presently know about
> human knowledge and behavior we have no means to
> produce such proofs.
> 
> I'll try to show why the possibility of those proofs
> seems implausible to me: algorithms or formal systems
> can, by definition, be completely objectified for
> humans because they are well-defined finite objects;
> this is so 'in principle', i.e. setting aside
> possible physical limitations; that is why it seems
> that humans can always in principle know anything
> about any given algorithm or formal system; on the
> contrary, algorithms are not always such possible
> 'objects' for themselves, on pain of circularity or
> paradox. This seems to make a difference.
> 
> Human acts of thinking are not in turn always possible
> objects for themselves since this would lead to
> circularity and other problems (no intentional act is
> its own intentional object, in phenomenological
> terms). So we have:
> 
> 1. Machines are not always possible 'objects' for
> themselves.
> 2. Machines are always possible objects for human
> thinking.
> 3. Human thinking is not always a possible object for
> itself.

So you have shown that humans cannot think exhaustively
about themselves, because we do not have enough
knowledge about ourselves to use as a basis for thought.
But we can think about machines, because we have
perfect information about a machine's mechanism.

> 
> We can conclude twice against mechanism. 
> 
We can conclude that we know more about machines than
about human beings.  This seems to be a limitation
on our knowledge rather than a limitation on the thing
known (or not).  In particular, I would say your argument
only shows that we lack enough information to show
constructively that human beings are machines.

-- hendrik

