[FOM] The Lucas-Penrose Thesis

Eray Ozkural examachine at gmail.com
Sat Oct 7 20:51:44 EDT 2006


On 10/7/06, Apostolos Syropoulos <asyropoulos at gmail.com> wrote:
> 2006/10/6, Eray Ozkural <examachine at gmail.com>:
> > To know its programming is exactly to have access to this
> > sequence of symbols. It implies nothing else. However, this is
>
> To have access to its symbols does not imply that a program does understand
> the meaning we have assigned to them.

Some quick comments.

I think it is quite likely that you are contradicting the usual
notion of "semantics" employed in formal programming language
design. I have snipped the other part, but it ought to be fairly
obvious that understanding a language does not mean reading the
mind of a person who utters nonsense.
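
To be concrete about that notion of semantics: in formal language
design, the meaning of a piece of syntax is fixed by a valuation
function, not by anyone's state of mind. A toy sketch in Python
(the tuple grammar and the valuation function are my own
illustration, not anything from the thread):

    # Toy denotational semantics for arithmetic expressions.
    # The "meaning" of an expression is whatever this valuation
    # function maps it to; the writer's intentions play no role.
    def meaning(expr, env):
        if isinstance(expr, (int, float)):   # literal
            return expr
        if isinstance(expr, str):            # variable
            return env[expr]
        op, lhs, rhs = expr                  # compound expression
        l, r = meaning(lhs, env), meaning(rhs, env)
        if op == '+':
            return l + r
        if op == '*':
            return l * r
        raise ValueError("unknown operator: %r" % (op,))

    # meaning(('+', 'x', ('*', 2, 3)), {'x': 1})  evaluates to 7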

On the other hand, one can conceive of analyzers that will correct
simple errors (i.e., fault-tolerant analyzers). This issue has also
been pursued theoretically (CAs, graph automata, etc.), but I think
it is ultimately beyond the scope of the present discussion. It
would, for instance, be quite impossible to guess the intention
(not "intension") of a programmer who has written code that simply
follows the wrong algorithm (say, one that reverses the order of
numbers instead of sorting them, which was the "intention"; see the
sketch below).

Still, for many particular kinds of code (i.e., those short of
Turing universality), it may be possible to achieve good fault
tolerance.
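
For instance (again a toy illustration of my own, not a result from
the literature): in a language whose only loop construct repeats a
body a literal number of times, termination is decidable outright,
so a checker stands on much firmer ground when correcting or
flagging faults:

    # A program in this toy language is a list of instructions,
    # where ('repeat', n, body) runs body exactly n times for a
    # literal n. Every such program halts, and a checker can
    # verify this syntactically: no halting problem arises below
    # Turing universality.
    def always_halts(program):
        for instr in program:
            if instr[0] == 'repeat':
                _, n, body = instr
                if not (isinstance(n, int) and n >= 0):
                    return False     # bound is not a literal
                if not always_halts(body):
                    return False
            # other instructions ('set', 'add', ...) take one step
        return True

    # always_halts([('set',), ('repeat', 3, [('add',)])]) is True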

Also, human programmers are in no way "oracles". Such machines were
clearly depicted as "hypothetical" by Turing himself, who knew well
that humans cannot solve the halting problem. I do think that
people who tacitly assume the contrary have not done a substantial
amount of programming, and/or miss well-known non-trivial halting
problem instances.
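
A standard non-trivial instance is the Collatz (3n+1) iteration:
whether the loop below halts for every positive integer is a famous
open problem, so a human who were truly a halting oracle would
settle it on sight.

    # Whether this loop terminates for every n > 0 is the Collatz
    # conjecture, which remains open. A genuine halting oracle
    # would decide it immediately; no human can.
    def collatz_halts(n):
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
        return True   # reached only if the iteration hits 1

    # collatz_halts(27) returns True (after 111 steps); whether it
    # returns for every n > 0 is unknown.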

Best Regards,

-- 
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
http://www.cs.bilkent.edu.tr/~erayo  Malfunct: http://myspace.com/malfunct
ai-philosophy: http://groups.yahoo.com/group/ai-philosophy

