[FOM] The Lucas-Penrose Fallacy

laureano luna laureanoluna at yahoo.es
Thu Oct 12 08:32:11 EDT 2006


On 10 Oct 2006 Bob Hadley wrote:

>However, both Chihara and myself pointed out more than 15
>years ago that in this debate it is necessary to distinguish
>between a formal model of the machine *simpliciter* (i.e. the
>machine without its input being supplied) and a formal model
>of the machine with its input supplied. When a Turing machine
>is supplied with an input set, a larger formal system is
>required to model the machine than when we are considering
>just the machine simpliciter.

>Both Lucas and Penrose, in their reductio arguments, assume
>that they are supplied with an input set which is a
>description of some formal model that is purported to be
>equivalent to themselves.  Supposing each of them to be a
>machine, any reasoning they engage in about this "input set"
>would need to be modelled in a larger formal system than the
>system they have received as input.  Ergo, any success they
>may obtain in "meta-proving" some Gödel sentence for the
>system supplied as input in no way establishes the conclusion
>of their putative reductio arguments.

I do not think this says much against the
Lucas-Penrose argument. I still think this argument
renders mechanism implausible, even if not impossible.


Any sound Turing machine is logically incapable of solving the
halting problem when fed its own code. But how could we
establish, for a sound human, the (corresponding) logical
impossibility of solving the halting problem for some
particular machine when faced with it, without assuming
beforehand that he is a machine?
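
To make the claim about machines precise, here is a minimal
sketch (in Python, with purely illustrative names halts and
diag) of the classical diagonal argument behind it; halts() is
of course hypothetical, since no such total decider exists:

    def halts(program_source, input_data):
        # Hypothetical total decider for the halting problem:
        # supposed to return True if the given program halts
        # on the given input, and False otherwise. No such
        # function can exist; it is assumed here only for the
        # sake of the diagonal construction below.
        raise NotImplementedError

    def diag(program_source):
        # Feed the program its own code and do the opposite
        # of whatever halts() predicts: loop forever if it
        # says "halts", return at once if it says "loops".
        if halts(program_source, program_source):
            while True:
                pass
        else:
            return

    # Applying diag to its own source yields the usual
    # contradiction: whatever halts() answers about diag run
    # on itself, diag does the opposite, so no sound machine
    # can decide halting on its own code.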

Any consistent formal system of sufficient strength is
logically unable to prove its own consistency. But how could we
establish, for a consistent human, the (corresponding) logical
impossibility of knowing the consistency of some particular
consistent system when he has it in front of him, again without
begging the question at issue?
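
The fact appealed to here is Gödel's second incompleteness
theorem, which in its standard form reads:

    If T is a consistent, recursively axiomatizable formal
    system containing enough arithmetic (Peano Arithmetic is
    more than enough), then

        T does not prove Con(T),

    where Con(T) is the arithmetized sentence expressing
    "T is consistent".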

The problem is that, for all we presently know about human
knowledge and behavior, we have no means to produce such
proofs.

I'll try to show why the possibility of those proofs seems
implausible to me. Algorithms and formal systems can, by
definition, be completely objectified for humans, because they
are well-defined finite objects; this is so 'in principle',
i.e. setting aside possible physical limitations. That is why
it seems that humans can always, in principle, know anything
about any given algorithm or formal system. By contrast,
algorithms are not always such possible 'objects' for
themselves, on pain of circularity or paradox. This seems to
make a difference.

Human acts of thinking are not in turn always possible
objects for themselves since this would lead to
circularity and other problems (no intentional act is
its own intentional object, in phenomenological
terms). So we have:

1. Machines are not always possible 'objects' for
themselves.
2. Machines are always possible objects for human
thinking.
3. Human thinking is not always a possible object for
itself.

We can conclude against mechanism twice over (once from 1 and
2, once from 2 and 3): if human thinking were a machine, then
by 2 it would always be a possible object for itself, which
contradicts both 1 and 3.

If, as I believe, intentionality (the semantic dimension of the
human mind) has functional consequences in humans, then, since
algorithms can be functionally described entirely in terms of
pure syntax, such humans cannot be algorithms.

Best regards,

Laureano Luna Cabañero.

   



		

