[FOM] The Lucas-Penrose Thesis
asyropoulos at gmail.com
Sat Oct 7 15:05:40 EDT 2006
2006/10/6, Eray Ozkural <examachine at gmail.com>:
> To know its programming is exactly to have access to this
> sequence of symbols. It implies nothing else. However, this is
Having access to its symbols does not imply that a program understands
the meaning we have assigned to them. Take, for example, a C compiler, which
is a program that "understands" C programs. In most cases it cannot find
the real reason why some particular syntax error was detected, and so
it produces quite unintelligent error messages (even when the compiler itself
is written in the C programming language!). The compiler dutifully follows
its instructions and makes no intelligent guesses.
> indeed a remarkable feat, because humans do not have their
> designs in front of them.
But we are not a sequence of symbols... Even if we assume that we are
just symbol-processing devices, we are devices, not just symbols.
> > A computer program cannot actually decide whether some other program with
> > some input will halt or not and you expect to have programs that will
> > correct themselves?
> Exactly. Human debuggers cannot solve the halting problem either,
> yet they can debug. On the other hand, like human debuggers computer
> debuggers can solve subsets of the halting problem, should that be required.
Not really. They operate more or less like oracle machines, where a human
plays the role of the oracle. In other words, debuggers are tools used
to spot errors; they do not auto-correct computer programs. After all, there are
many cases where it is not clear what the original intention of the programmer
was, and this makes things even more difficult.
> > Could you please provide me with a simple example? It would be really
> > interesting to have a concrete example.
> Yes, I can give a concrete example. Some examples are given in
> the "Godel Machine" paper by Schmidhuber:
One can read the following there:
Gödel machine (or `Goedel machine' but not `Godel machine') rewrites
any part of its own code as soon as it has found a proof that the
rewrite is *useful*...
And what makes something useful? In addition, how can the machine decide
that it has found a proof? More specifically, the machine continuously
rewrites its own code, which makes it difficult to decide whether something
is a proof or not. I am not really convinced that this is an intelligent
machine.
366, 28th October Str.
GR-671 00 Xanthi, GREECE