[FOM] The Lucas-Penrose Thesis

Eray Ozkural examachine at gmail.com
Fri Oct 6 14:19:16 EDT 2006


Dear Apostolos and the list,

On 10/5/06, Apostolos Syropoulos <asyropoulos at gmail.com> wrote:
> And how exactly can a computer program know its programming? A program
> exists once some computer programmer enters symbols, which follow a certain
> grammar, into a computer file. As such it has no self-awareness or any other
> property: it is just a sequence of symbols.

To know its programming is exactly to have access to this
sequence of symbols. It implies nothing else. However, this is
indeed a remarkable feat, because humans do not have their
designs in front of them.

Thus, the computer can read its own program just as it reads any other
memory location, such as its input.
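
For concreteness, here is a minimal sketch of such self-inspection (my
own toy illustration in Python, not taken from any paper): a program
whose "knowledge of its programming" is nothing more than having its
own source text available as data.

    # Read our own source file; "knowing the programming" means only
    # having this sequence of symbols available for inspection.
    import sys

    with open(sys.argv[0]) as f:
        my_source = f.read()

    print(len(my_source), "characters of my own code are available to me")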

> > 2) It can keep a full trace of execution and then
> > change/debug its programming. Using the trace it can
> > perfectly recreate previous mental states
>
> A computer program cannot actually decide whether some other program with
> some input will halt or not and you expect to have programs that will
> correct themselves?

Exactly. Human debuggers cannot solve the halting problem either,
yet they can debug. On the other hand, like human debuggers, computer
debuggers can solve restricted subsets of the halting problem, should
that be required.
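
To make "subsets of the halting problem" concrete, here is a toy
decider of my own (in Python) for a trivially restricted class:
straight-line code with no loops and no calls always halts, so the
checker can answer "halts" for that class and stay silent elsewhere.

    import ast

    def halts_if_straight_line(source):
        """Return True if the code is straight-line (hence halts),
        or None if it falls outside this decidable subset."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.While, ast.For, ast.Call)):
                return None   # outside the subset; give no verdict
        return True

    print(halts_if_straight_line("x = 1\ny = x + 2"))       # True
    print(halts_if_straight_line("while True:\n    pass"))  # None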

> Could you please provide me with a simple example? It would be really
> interesting to have a concrete example.

Yes, I can give a concrete example. Some examples are given in
the "Godel Machine" paper by Schmidhuber:
http://www.idsia.ch/~juergen/goedelmachine.html

Suppose that a program consists of a reaction protocol and a
reflective program that tries to improve the whole program, the two
running on a time-sharing basis. The reflective program can adjust how
much computation each part receives according to the requirements of
the environment; for instance, if self-reflection does not help in a
simple environment (i.e. the reflective part proves that it will not),
it can allocate itself no time, wasting no resources.
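
Here is a crude sketch of that arrangement (the names and the
shut-off heuristic are entirely my own; the actual Godel Machine
relies on proof search rather than a heuristic like this):

    def reactive_step(observation):
        # placeholder policy: react quickly with a fixed rule
        return "default_action"

    def reflective_step(history, share):
        # placeholder self-reflection: if the environment has shown no
        # variety at all, decide that reflection is not helping and
        # allocate it no further time. (The real construction would
        # prove such a statement rather than guess it.)
        if len(set(history)) <= 1:
            return 0.0
        return share

    def run(observations, reflection_share=0.5):
        history = []
        for obs in observations:
            history.append(obs)
            action = reactive_step(obs)
            if reflection_share > 0.0:
                reflection_share = reflective_step(history, reflection_share)
            print(obs, action, "reflection share:", reflection_share)

    run(["same", "same", "same"])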

If this example seems like a limit case, I can try another. To borrow
an example from Star Trek, the robot can turn off its "emotion chip"
if it decides that this should be done. Humans, however, cannot modify
their own programming so easily or so fundamentally.

With regard to real-life debugging, suppose that a control program
that the robot uses malfunctions in a novel case. The computer can
then proceed to fix the subroutine using the ordinary
debugging/programming methods that human programmers also use: for
example, tracing the execution of the program, proving invariants, and
checking whether code segments work as expected, and, if they do not,
performing syntactic transformations on those segments in an educated
fashion.
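
As a toy illustration of tracing plus invariant checking (again my own
example, deliberately trivial): run the suspect subroutine under a
trace, check the property it is supposed to guarantee, and point at
the offending branch.

    import sys

    def buggy_abs(x):
        if x < 0:
            return x      # bug: should be -x
        return x

    def tracer(frame, event, arg):
        if event == "return":
            print("trace:", frame.f_code.co_name, "returned", arg)
        return tracer

    sys.settrace(tracer)
    result = buggy_abs(-3)
    sys.settrace(None)

    # the invariant the subroutine is supposed to satisfy
    if result < 0:
        print("invariant violated: abs(x) must be non-negative;"
              " suspect the negative branch")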

> > 3) It can rewrite itself from scratch if it feels like.
>
> In other words a computer virus is more conscious than a human! Note, however,
> that self-reproducing programs are completely dumb. They appear to be smart,
> but this is not the case.

However, self-reproducing programs are not necessarily dumb.

In particular, program-search-style universal problem solvers can be
expected to recreate their own search histories.

The ability to rewrite itself may seem insignificant at first, but it
may in fact lead to a faster kind of (cognitive) evolution. The
computer can bootstrap itself by inventing new basic algorithms that
are smarter than the ones written by its creators (us).

> > 4) It can extend its mind, for instance by forming new
> > perception systems that can explore another sensory modality.
>
> Programs do not have minds. Maybe one can simulate mental states with
> programs, but that's all.

This seems to be an assumption on your part. Since I equate all
intelligent agents with minds, I can speak this way. Thus, on one
particular (philosophical) position, the simulacrum is a mind as well.
At any rate, my remarks are independent of philosophical theory, which
is often superfluous. It may even be dangerous to try to base science
on philosophical speculation.

> > 5) Turn on/off subsystems at will, precisely manage
> > computational power given to processes.
> >
> > That is, it can be self-aware at the level of its programming.
>
> Could you please give me an example of a self-aware program?

Of course. The "Godel Machine" paper presents an example of a
self-aware program, as indicated in the first example that I gave.
There is no empirical proof that Godel Machine-like systems will
achieve strong AI, but I view the Godel Machine as a good theoretical
model that researchers can build upon.

Best Regards,

-- 
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
http://www.cs.bilkent.edu.tr/~erayo  Malfunct: http://myspace.com/malfunct
ai-philosophy: http://groups.yahoo.com/group/ai-philosophy

