FOM: more on Con(ZFC) and semantic grasp
Neil Tennant
neilt at mercutio.cohums.ohio-state.edu
Mon Aug 24 10:38:18 EDT 1998
In an earlier posting I wrote:
> It is common, in the theory of meaning, to appeal to the notion of
> being able, *in principle*, to do something/recognize
> something/carry out some procedure, etc., without regard to its
> feasibility (in terms of space- or time-requirements).
and Steve then asked
> Whose "theory of meaning" is that? Tarski's definition of
> satisfaction and truth? Certainly Tarski's definition is a landmark
> of foundational research, but it has its limitations. In particular,
> it's not very convincing when we come to the psychology of what it
> means to understand something. This psychology needs to take account
> of the way the human mind works, in particular the mind's need to
> conceptualize and simplify by boiling down to essentials.
Perhaps the two main *systematic* theories of meaning since the time
of Tarski are those of Donald Davidson and Michael Dummett. These
theories take their lead from Tarski, in identifying the meanings of
declarative sentences with their truth-conditions. The
truth-conditions would be spelled out by an appropriately recursive
theory of Tarski's kind, and the theory would be subject to Convention
T.
Davidson and Dummett each have their own special twists or emphases in
their respective approaches. For Davidson, it is important to embed
the theory of meaning (i.e. ascription of truth-conditions to
sentences [of arbitrary complexity and length]) in a theory of
so-called 'radical interpretation', which attempts simultaneously to
ascribe to speakers such beliefs and desires as help best to make
overall sense of what they say and do. For Dummett, it is important to
give an account of the central recognitional capacities whose exercise
by competent speakers manifests their grasp of the meanings of terms
(such as the logical operators). This is Dummett's way of trying to
make more precise the otherwise vague Wittgensteinian requirement that
"meaning is use", or that "linguistic communication is essentially a
public matter", or that "linguistic behaviour is essentially
observable." This Dummettian "manifestation requirement" is at the
heart of recent attempts to justify (something in the region of)
intuitionistic logic as the correct logic---that is, as the canon of
inference that exploits exactly such meanings of the logical operators
as can be made manifest in one's linguistic behaviour. (See chapters 6
and 7 of my book "The Taming of The True", for a critique of Dummett's
own method of argument based on the manifestation requirement, and for a
proposal as to how to repair that method of argument.)
What is important, however, for both these leading contemporary
theorists of meaning within the analytic tradition, is that their
theories can be taken to apply, in principle, to sentences of
unmanageable length, sentences with which no speaker could "feasibly"
deal. Their theories are intended to apply without any finite upper
limit being set to the imagined abilities, or capacities, or
"computing" resources, of the mind that is able to grasp their
meanings. That is why philosophers today speak of
"understanding-in-principle". For, the same principles would apply to
understanding a simple sentence as to understanding a very long
one. The idea is that, provided only that one understand the
constituent terms, and understand the way those terms are put together
to form the sentence, then one has an implicit
understanding-in-principle of the sentence itself.
Steve went on to say:
> Let me clarify. When I speak of "understanding", I'm not talking
> about some artificial construct of analytical philosophy. Rather I am
> talking about conceptual understanding, in the layman's sense.
> Conceptual simplicity is the key to this. Harvey once told me that,
> in testing the understandability of his independent statements, he
> sometimes uses what he calls "the corridor test": can you explain it
> to a mathematician while standing in the corridor outside the
> mathematician's office. This is the kind of understandability that I
> am referring to. Con(ZFC) fails the corridor test, unless the
> mathematician in question already knows quite a bit of mathematical
> logic and set theory.
I beg to differ, at least in a minor regard. Imagine you had Con(ZFC)
written on a ticker tape in primitive arithmetical notation, and could
let it scroll by on the wall of the corridor opposite your colleague's
door. He/she could easily check that it involved only zero, successor,
plus, times and logical operators. The only question remaining would
be: is it well-formed? So, imagine that you now had a huge
two-dimensional electronic display board with the sentence broken down
into its tree-parsed form, bits of which might look like:
        :
        &
       / \
      P   Q
      :   :
Your colleague would be able quickly to scan all the furcations to
check for well-formedness. The result would be confident conviction
that one had here a meaningful statement of first-order arithmetic.
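The kind of check the colleague performs — scanning each furcation to see that every symbol has the right number of branches — can be sketched in a few lines. This is purely illustrative (the symbol inventory and tree representation here are my own choices, not anything from the displays imagined above):

```python
# Each parse-tree node is (symbol, children). A sentence of primitive
# arithmetical notation is well-formed iff every node's symbol is drawn
# from the language of arithmetic and has its correct number of children.
ARITY = {
    "0": 0, "S": 1, "+": 2, "*": 2,      # zero, successor, plus, times
    "=": 2,                              # the sole predicate
    "~": 1, "&": 2, "->": 2,             # logical operators
    "A": 2, "E": 2,                      # quantifiers: (variable, body)
}

def well_formed(node):
    """Check that every furcation matches its symbol's arity."""
    symbol, children = node
    if symbol not in ARITY:
        # anything else must be a variable, i.e. a bare leaf: x, y, z, ...
        return symbol.isalpha() and children == ()
    return len(children) == ARITY[symbol] and all(well_formed(c) for c in children)

# The fragment on the display board: a conjunction P & Q,
# with P and Q here taken to be the atomic sentence 0 = 0.
p = ("=", (("0", ()), ("0", ())))
tree = ("&", (p, p))
print(well_formed(tree))   # True
```

The point of the thought-experiment survives the toy scale: the check is local and mechanical at each node, which is why a competent speaker could perform it on a sentence of any length, given enough display board.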
No doubt your chairperson would look askance at budgetary requests for
the installation of such electronic boards, and the Athletics
Department wouldn't be very happy with your getting one before the new
football stadium had one. But, hey, this is a thought-experiment. And
the extra element of the narrative is hardly more implausible than the
prospect of a core-mathematical colleague being willing to stand for
any longer than one nanosecond on the threshold of their office,
indulging an inquiring mathematical logician with evidence of
intuitive grasp of some sneaky little statement...
I concede that your colleague wouldn't have any inkling of what the
sentence on the ticker tape or the electronic board actually says; but
he/she could have the conviction that it nevertheless says something
potentially intelligible. He/she would not necessarily be able to make
intelligent conjectures as to its provability status; but he/she could
eventually recognize as proofs constructions that others might present
to him/her, whose conclusions establish what the provability status of
that long sentence is. So, in some important respect Con(ZFC) would
pass the corridor test.
The respect in which it would not pass the test is already implicit in
what I have said above. No doubt that is the respect that Steve and
Harvey would be most concerned to emphasize: upon seeing *that* one of
Harvey's sentences is meaningful, the mathematical colleague
immediately "has in mind" the meaning that it carries, and can begin
doing intelligent things with it, such as conjecturing that it is
true/false/independent of such-and-such axioms, etc. Here let me
emphasize that I agree with Harvey and Steve, and am on their side in
the matter.
But I would want to maintain, as a meaning theorist, that being able
to evince such intelligent "active" behaviour in response to the
sentence goes strictly beyond what is minimally required for a thinker
to display an understanding of a sentence. One needs to make a
contrast between what might be called "active" and "passive" manifestations of
understanding. For example, I understand Goldbach's Conjecture, but I
can't actively display that understanding by doing anything
intelligent with it. The best I can do is say things like "Well, since
it has the form (x)F(x), I would regard it as true if you could find a
proof of F(n) (for arbitrary n), and I would regard it as false if you
could refute any instance F(t)." And I could iterate such weak
displays of my grasp by saying the appropriate things about F(n) and
F(t). The best that could be said of one who understands Goldbach's
Conjecture is that they should be able, in principle, to recognize a
proof of it as such if presented with one, and likewise for a
disproof. But it would be asking or demanding too much of them that
they should themselves be able to set about finding such a proof or a
disproof in order to evince their grasp of the conjecture.
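The "passive" grasp just described can be made vivid with a small sketch (my own illustration, not part of the original discussion): one can mechanically verify any given instance F(n) of Goldbach's Conjecture, while having no idea how to settle the universal claim (x)F(x):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small instances."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_witness(n):
    """For even n >= 4, return primes (p, q) with p + q = n, or None.

    A returned pair verifies the instance F(n); a None for some even n
    would refute the conjecture outright.
    """
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

print(goldbach_witness(28))   # (5, 23)
```

Checking instances one by one is exactly the weak, iterable display of grasp described above: it manifests understanding of what the conjecture says without any "active" progress toward proving or refuting it.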
So the requirements for attribution of understanding, or semantic
grasp, are appropriately weak. Now in connection with Con(ZFC) I had
said of a potential understander that
> It would be wrong to demand of him/her a knowledge of numeralwise
> representability of recursive functions as a prerequisite for
> *semantic grasp*.
To this Steve replied
> Neil, I don't understand. How am I demanding a knowledge of
> numeralwise representability of recursive functions? Such knowledge
> seems irrelevant to a conceptual grasp of Con(ZFC).
Steve is correct that such knowledge would be irrelevant to a grasp of
Con(ZFC); but he *is* demanding a knowledge of numeralwise
representability of recursive functions in order to invest Con(ZFC)
with the meaning that traffics in such notions as proof. For, one can
only "read into" the primitive notation of Con(ZFC) the claim that ZFC
is consistent by "seeing it" as saying "There is no proof, in ZFC, of
absurdity as conclusion." Unravelling this a bit, one has to see the
sentence as saying "There is no natural number that codes up a proof,
in ZFC, of absurdity as conclusion."
Uh-oh!--what is this notion of "coding up a proof"?! *Now* we need
recourse to the representability of recursive predicates such as the
predicate "x is (the Gödel number of) a proof of (the sentence whose
Gödel number is) y". Even if we do not appeal to representability in
general, we must at least appreciate the fact that this particular
numerical predicate embedded in Con(ZFC) does indeed numeralwise
represent the syntactic notion of proof. One has to be able to go
back and forth between, on the one hand, the holding of the informal
relation of proof among syntactic objects, and, on the other hand, the
provability-in-arithmetic of certain formulae instantiated by numerals
for Gödel-numbers of those same syntactic objects. This is one big
chunk of advanced and ad-hoc theorizing that has to be digested as a
prerequisite to being able to "read into" Con(ZFC) the consistency
claim itself. Steve says this is a matter of "conceptual simplicity";
but I would point out that the price of such conceptual simplicity is
a great deal of hard theorizing needed to win through to that
conceptual vantage-point.
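To indicate what "a natural number that codes up a proof" involves, here is the standard textbook device (prime-power coding of finite sequences; an illustration of the general idea, not anything specific to Con(ZFC) or to the argument above). A proof is a finite sequence of formulas; once each formula has a number, the whole sequence becomes one number:

```python
def primes():
    """Generate the primes 2, 3, 5, ... by trial division."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def code_sequence(nums):
    """Code (n1, n2, ...) as 2**(n1+1) * 3**(n2+1) * 5**(n3+1) * ...

    The +1 in each exponent keeps the coding injective even when some
    entry is 0.
    """
    result = 1
    for p, n in zip(primes(), nums):
        result *= p ** (n + 1)
    return result

def decode_sequence(code):
    """Recover the sequence from its code by counting prime exponents."""
    nums = []
    for p in primes():
        if code == 1:
            return nums
        e = 0
        while code % p == 0:
            code //= p
            e += 1
        nums.append(e - 1)

seq = [3, 0, 2]
c = code_sequence(seq)          # 2**4 * 3**1 * 5**3 = 6000
print(c, decode_sequence(c))    # 6000 [3, 0, 2]
```

The "hard theorizing" mentioned above is precisely what lies beyond this toy: showing that predicates about such codes (e.g. "x codes a proof in ZFC of y") are recursive, and that recursive predicates are numeralwise representable in arithmetic.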
As for Steve's closing misgiving about Frege's anti-psychologism: I
can only recommend a thorough (re-?)reading of the Preface to the
first volume of his Grundgesetze der Arithmetik. I think also that
developments in AI will bear Frege out on this score. For one of the
likely developments will be artificial mind/brains that will
effortlessly process much, much more complicated syntactic objects
than we are currently equipped to handle. They will be genuine
*prosthetic extensions* of human intelligence, able to parse and
deduce with an exactitude and speed that will leave even our best
human practitioners breathless, if not downright miserable. (Imagine
it!: a zillion Harvey-like chips in one machine...) Yet such
artificial intelligences will have been constructed in accordance with
basic *principles* of symbol-manipulation that will have been
excogitated from our own case, and smoothly extrapolated by the more
advanced artificial intelligences. With no limit to the add-on memory
capacities that these intelligences might have, the distinction
between "what can be held at one time in one's mind" and what can be
competently semantically processed will disappear. And we will have
Frege to thank for the very possibility of being able to construct
such prosthetic extensions of meatware-bounded human intelligence.
For, by dismissing such limitations altogether, he paved the way to
our eventually transcending them. (Similar thanks would go to Chomsky,
for his competence/performance distinction in theoretical grammar, by
the way.)
Neil Tennant