[FOM] The rule of generalization in FOL
aa at tau.ac.il
Fri Sep 10 03:18:01 EDT 2004
On Thu, 9 Sep 2004, Neil Tennant wrote:
> If from premises X not involving the name 0 you have proved F(0), then you
> may infer (x)F(x), in which 0 no longer occurs.
> Indeed, for any name a:
> If from premises X not involving the name a you have proved F(a), then you
> may infer (x)F(x), in which a no longer occurs.
> Typographically, 0 has no special status; it is a name like any other.
Of course typographically 0 has no special status. What I meant is the
different way that this symbol is used as a constant in mathematical
texts and the way your constants are used. So let me elaborate this time:
Whenever FOL is applied in mathematics (or in other areas) the first
step is to choose the first-order language to be used. Recall that
there is no single first-order language, but a big class of first-order
languages. Now a first-order language is specified by giving its
signature, which is the list of its declared predicate symbols (together with
their arities), function symbols (together with their arities) and
*constants*. A structure for such a language consists of a nonempty
domain, together with appropriate interpretations of the symbols in
its signature. Of course, the interpretation in a structure for a language L
of a given constant is a *fixed* element of that domain. *Sentences*
of L are distinguished from other formulas by *having their truth-value
fully determined once a structure is selected*.
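To make the distinction concrete, here is a minimal sketch (mine, not part of the standard definitions, and with names chosen for illustration) of a signature, a structure, and term evaluation. The point it illustrates: a declared constant takes its *fixed* interpretation from the structure, while a variable takes its value from an assignment.

```python
from dataclasses import dataclass

@dataclass
class Signature:
    """A first-order signature: declared predicate symbols, function
    symbols (each with an arity), and constants."""
    predicates: dict     # name -> arity
    functions: dict      # name -> arity
    constants: frozenset

@dataclass
class Structure:
    """A structure: a nonempty domain plus interpretations of the
    declared symbols of the signature."""
    domain: set
    interp: dict         # symbol name -> element / function / relation

# Example: the standard language of PA, with "0" as its one declared constant.
PA = Signature(
    predicates={"=": 2},
    functions={"S": 1, "+": 2, "*": 2},
    constants=frozenset({"0"}),
)

# The standard structure (a finite stand-in for N): 0 denotes the number zero.
standard = Structure(
    domain=set(range(100)),
    interp={"0": 0, "S": lambda n: n + 1,
            "+": lambda a, b: a + b, "*": lambda a, b: a * b},
)

def eval_term(term, struct, assignment):
    """Evaluate a term, encoded as a nested tuple, e.g. ("S", "0").
    A declared constant gets its fixed interpretation from the structure;
    only variables consult the assignment."""
    if isinstance(term, str):
        if term in struct.interp:      # declared constant
            return struct.interp[term]
        return assignment[term]        # variable
    op, *args = term
    return struct.interp[op](*(eval_term(a, struct, assignment) for a in args))
```

Note that evaluating the closed term S(0) needs no assignment at all, whereas x + 0 does; this is exactly the sense in which sentences have their truth-value fully determined once a structure is selected.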
As an example, take L to be the standard first-order language of PA.
It includes a constant that we usually denote by "0", but we may of course
use any other symbol instead, like "1" or "%". What matters is that this
symbol is an official constant symbol of L, and gets a fixed
value in any structure for L. In the standard structure for L
the interpretation of 0 is the number zero. So when I used "0"
in my original message I meant it only as a standard example of "a declared
constant of the language under consideration".
Now in your system, if I understand it correctly, every language comes
equipped with an infinite stock of "constants". Still, I am convinced
that somehow a distinction between an official (or declared) constant
of the language and other "constants" is kept. Otherwise one is not
able to refer to *the* standard structure for the language of Peano's
arithmetic, but only to an infinite set of "standard structures",
depending on the interpretations given to the various "constants". I am
convinced also that somehow you distinguish between *pure* sentences of the
language and "sentences" that contain non-declared "constants". Otherwise,
e.g., you will not even be able to *formulate* Tarski's theorem on
the undefinability of arithmetical truth.
Now, I see no practical difference
between non-declared constants and (free) variables, or between
impure sentences and formulas containing free variables. Is there any?
It seems to me that all this boils down to a difference in terminology.
This becomes rather clear from the end of your message:
> The difference between a name and a variable is that the latter, but not
> the former, can occur in a sentence only if bound by a quantifier. One has
> the usual inductive definition of "phi is a formula with such-and-such
> occurrences of variables free"; and a sentence is a formula with no free
> occurrences of variables.
> The *proof system*, however, allows only sentences to occur in proofs.
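The usual inductive definition quoted above can be sketched as follows (the encoding of formulas as nested tuples is mine, chosen only for illustration, and it assumes predicate names are distinct from the connective tags):

```python
CONSTANTS = {"0"}   # declared constants of the language (assumption)

def term_vars(t):
    """Variables occurring in a term; declared constants contribute none."""
    if isinstance(t, str):
        return set() if t in CONSTANTS else {t}
    return set().union(*(term_vars(a) for a in t[1:]))

def free_vars(phi):
    """Free variables of a formula, by the usual induction.
    Formulas: ("p", t1, ..., tn) atomic; ("not", phi);
    ("and"/"or"/"implies", phi, psi); ("forall"/"exists", x, phi)."""
    op = phi[0]
    if op in ("forall", "exists"):
        _, x, body = phi
        return free_vars(body) - {x}     # the quantifier binds x
    if op in ("and", "or", "implies"):
        return free_vars(phi[1]) | free_vars(phi[2])
    if op == "not":
        return free_vars(phi[1])
    # atomic formula: predicate applied to terms
    return set().union(*(term_vars(t) for t in phi[1:]))

def is_sentence(phi):
    """A sentence is a formula with no free occurrences of variables."""
    return not free_vars(phi)
```

On this definition "free" and "bound" are properties of *occurrences* in a formula; Takeuti's alternative, discussed below, instead makes the free/bound split an absolute property of the variables themselves.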
Since your proof system in any case allows only sentences to occur in proofs, we
may assume that all formulas in your languages are sentences. Hence
every atomic term in your formulas/sentences is either a variable (bound
by some quantifier) or a "constant", and no symbol may function both
as a variable and as a constant. Well, exactly this type of language
is used in many texts on proof theory, for example: Takeuti's
classical "Proof Theory". The only difference is a difference in terminology:
what you call "constant" is called there "free variable" and what you
call "variable" is called there "bound variable". To clarify: "bound"
and "free" are there *not* properties that a variable may or may
not have in a formula, but something absolute: every first-order
language comes equipped with two disjoint infinite sets of variables:
the set of "free variables" and the set of "bound variables". The
latter can occur in a sentence only if bound by a quantifier, the former
cannot be bound by a quantifier (so in Takeuti's book there
are officially 3 types of atomic quasi-terms: constants, free
variables, and bound variables). This approach has certain technical
advantages. For example: the cut-elimination theorem may fail for
sequents in which the same variable occurs both free and bound. Thus
the classically (and intuitionistically) valid sequent
\forall x\forall y (p(x) & q(y)) => p(y)
has no cut-free proof (although it is easily derivable using a
cut on \forall x.p(x) ). This problem is avoided by Takeuti's approach.
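To spell out the derivation just mentioned (the display is mine, in standard sequent notation):

```latex
% The cut on \forall x\,p(x); both premises have easy cut-free proofs.
\[
\dfrac{\forall x\forall y\,(p(x)\wedge q(y)) \;\Rightarrow\; \forall x\,p(x)
       \qquad
       \forall x\,p(x) \;\Rightarrow\; p(y)}
      {\forall x\forall y\,(p(x)\wedge q(y)) \;\Rightarrow\; p(y)}
\;(\mathrm{cut})
\]
```

A cut-free proof of the conclusion, by contrast, would have to instantiate the outer \forall x with y, and substituting y for x inside \forall y (p(x) & q(y)) captures y. In Takeuti's setting the problematic end-sequent cannot even be written down, since y would have to belong to both the free and the bound alphabet at once.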
The approach has of course disadvantages as well (thus Takeuti himself
has to introduce at a certain point the concept of quasi-formula, in
which a "bound variable" may occur free). Anyhow, my main point
is that as far as I can tell (unfortunately, I don't have your book),
there is no difference whatsoever between what Takeuti is doing and
what you are doing, except for terminology. Personally, I prefer
Takeuti's terminology: If you can substitute terms for something,
and that something may have different values in the same structure,
then this something is to me a variable, not a constant. But
I am not going to quarrel about terminology. However, I do think
that real philosophical questions cannot be solved by a change in terminology.
So if Hodges has a real philosophical problem (I don't think so), then
your system is not going to solve it. Moreover: the fact that only
sentences are allowed in proofs in your system has little
significance for me if what you call a sentence is what others
(including me) call a formula.