[FOM] The Lucas-Penrose Thesis vs The Turing Thesis
steven at semeiosis.org
Wed Oct 4 13:53:57 EDT 2006
On Oct 2, 2006, at 1:26 PM, John McCarthy wrote:
> I don't think consciousness presents any logical problems provided the
> thinker doesn't purport to be able to guarantee answers to "will I
> ever" and "are my thoughts consistent". Humans can observe some of
> our mental state, but computer programs can be much more conscious
> than that. My 1995 Machine Intelligence 15 article "Making robots
> conscious of their mental states" is available as
> It goes into much detail about consciousness of the past, intentions,
> hopes, knowledge and non-knowledge.
I am not at all sure what Professor McCarthy means by "computer
programs can be much more conscious than that." I have heard him say
this before, and it remains a little too mysterious.
I doubt that experience has magnitude. But I suspect McCarthy really
means that the types of things that machines deal with are ultimately
capable of being broader and also more persistent than the range of
types and persistence available to us. With this I can agree.
However, we disagree about the likelihood that this can be achieved
with current mechanics.
McCarthy's underlying assumption is that intelligent behavior can be
equated with "thinking." This is Alan Turing's anthropomorphic view, and
McCarthy's logical introspection, discussed in the paper he
references, does not change this view substantively since it goes to
A lot depends upon our definition of the term "thinking." There are
two phenomenal aspects of thinking - there is the process of thinking
(the effective parts of which we attempt to embody in our notions of
logic) and there is experience. While the two are not distinct, as
this analysis notoriously suggests, simply equating them eliminates
experience without providing an explanation for its presence - it is
an absurd reduction - and it prevents us from considering what role
experience may play in the mechanics.
The process of thinking, for me, relies upon my experience. Only the
objective techniques are captured by our existing symbolic systems -
and these are limited today by their engineering in existing computer
systems. Simply put, biological systems provide an a priori,
system-wide integration that serves as a basis for mechanisms
of differentiation in the construction of logic. In my own work I
provide a role for experience in such engineering. Briefly, by
introducing an inert primitive against which natural selection
Knowledge in modern computer systems is assembled by integration in
logical construction - which creates the difficulty of integrating
non-local branches - whereas knowledge in biological systems
is engineered by differentiation against the entire embodiment. This
view is consistent with the expectations of Rudolf Carnap and others
in the Vienna Circle, and it is the basis of Roger Penrose's review
of non-local phenomena in Quantum Mechanics.
The right and remarkable result of modern symbolic systems is that we
can indeed endow machines with aspects of our intelligence - but
this does not allow us to infer that such machines experience. This
brings us back to the merit of taking experience seriously as a
phenomenon in the world.
Dr. Steven Ericsson-Zenith
Institute for Advanced Science & Engineering