[FOM] The Lucas Penrose Thesis

Eray Ozkural examachine at gmail.com
Mon Oct 2 19:01:09 EDT 2006


On 10/2/06, praatika at mappi.helsinki.fi <praatika at mappi.helsinki.fi> wrote:
> It is certainly true that a purely syntactic machine cannot mean anything.

I disagree with this certainty. A completely disembodied computer (i.e., one
lacking "bodily" causal interaction with the environment) can take as input
sentences that are then converted into logical form by a semantic analyzer,
and then put these logical forms (i.e., computer programs or sentences in
FOL) to good use. One can also conceive that the program designs a
language in which it can define meaning (i.e., references and what not)
completely. At any rate, we should be able to agree that the
meaning of a natural language term, as ordinarily understood by us, is
nothing more than an aggregate of mental states (i.e., states of perception)
that are in our memory.
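To illustrate the first step, here is a minimal sketch of what such a
"semantic analyzer" might look like for a toy fragment of English; the
grammar, predicate names, and output notation are all invented for
illustration, not a real analyzer:

```python
# Toy semantic analyzer: translates a tiny fragment of English into
# FOL-style strings. Real analyzers handle far richer grammars; this
# only covers "every NOUN VERBs" and "NAME VERBs".

def analyze(sentence):
    """Return an FOL-style form for the sentence, or None if unparsable."""
    words = sentence.lower().rstrip(".").split()
    if len(words) == 3 and words[0] == "every":
        noun, verb = words[1], words[2].rstrip("s")
        return f"forall x ({noun}(x) -> {verb}(x))"
    if len(words) == 2:
        name, verb = words[0], words[1].rstrip("s")
        return f"{verb}({name})"
    return None

print(analyze("Every dog barks"))  # forall x (dog(x) -> bark(x))
print(analyze("Socrates thinks"))  # think(socrates)
```

Once sentences are in such a logical form, they can be fed to a theorem
prover or executed as programs, which is what "put to good use" means here.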

Thus, we can make the computer watch video tapes that convey the meaning
of these words, and the computer will be able to associate the sensory
data with the words by using machine learning algorithms.
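The association step above can be sketched very simply; the following is a
hedged illustration (not any particular algorithm from the literature) in
which each word is paired with "sensory" feature vectors, the learner keeps
one centroid per word, and a new observation is labeled with the nearest
word. The features and data are made up for the example:

```python
# Minimal word-grounding sketch: learn one centroid per word from
# (word, feature_vector) pairs, then label new sensory vectors with
# the word whose centroid is nearest (nearest-centroid classification).

def train(examples):
    """examples: list of (word, feature_vector). Returns word -> centroid."""
    sums, counts = {}, {}
    for word, vec in examples:
        acc = sums.setdefault(word, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[word] = counts.get(word, 0) + 1
    return {w: [x / counts[w] for x in acc] for w, acc in sums.items()}

def classify(centroids, vec):
    """Return the word whose centroid is closest to vec (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda w: dist2(centroids[w], vec))

# Invented "sensory" data: two colour words with nearby feature vectors.
examples = [("red", [0.9, 0.1]), ("red", [1.0, 0.0]),
            ("blue", [0.1, 0.9]), ("blue", [0.0, 1.0])]
centroids = train(examples)
print(classify(centroids, [0.8, 0.2]))  # red
```

The point is only that the word-to-percept association is an ordinary
learning problem, not that this particular method is adequate in practice.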

About the Chinese Room "thought experiment", however, one can also
conceive of a mathematical version of the experiment. It may be a
good way to show why the said experiment does not hold up, and it would
in fact be quite on-topic for the FOM list, since it would be a discussion
about what constitutes the meaning of a mathematical proposition.

Best,

-- 
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
ai-philosophy: http://groups.yahoo.com/group/ai-philosophy