# On the fully rigorous formalization of deductive reasoning in mathematics

Patrik Eklund peklund at cs.umu.se
Tue May 31 03:30:41 EDT 2022

Let me add some categorical observation, and as related to my previous
posting.

First, terms (expressions) must not be confused with sentences. The
fundamental difference is that the term functor can be extended (or
should be extendable) to a monad, so that we can compose
substitutions. The sentence functor is a functor but is not extendable
to a monad, i.e., we do not have substitutions for sentences. We do
have substitutions for the terms that appear within sentences, but
that's another story.
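As a toy illustration (my own sketch, not part of the post), here is the
term functor over a small signature, with substitution application and
composition of substitutions; composing substitutions is precisely what
the monad structure on terms provides.

```python
# A minimal sketch (my own illustration): terms over variables, where
# applying a substitution is the "bind" of the term monad, and composing
# two substitutions corresponds to the monad's Kleisli composition.
from dataclasses import dataclass


@dataclass(frozen=True)
class Var:
    name: str


@dataclass(frozen=True)
class Op:
    op: str
    args: tuple


def subst(t, s):
    """Apply a substitution s (dict: variable name -> term) to term t."""
    if isinstance(t, Var):
        return s.get(t.name, t)
    return Op(t.op, tuple(subst(a, s) for a in t.args))


def compose(s1, s2):
    """Composition of substitutions: applying compose(s1, s2) is the
    same as applying s1 first and then s2."""
    out = {v: subst(t, s2) for v, t in s1.items()}
    for v, t in s2.items():
        out.setdefault(v, t)
    return out
```

For example, with s1 = {x -> f(y)} and s2 = {y -> c}, the composite
sends x to f(c), and applying it in one step agrees with applying s1
then s2.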

If we need to work with "sets of sentences", we are essentially
composing the powerset functor with the sentence functor, i.e., if we
choose to work over the category of sets.

Now note the important thing about the sentence functor. The sentences
based on the sentence functor must be fixed BEFORE we can work with the
relation ":" on PX x X so that X':T makes sense. But now, X':T is NOT a
sentence, but something else.

Note also how "," in "X1,X2" is the union (not the co-product!) of
elements in PX.
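A concrete way to see that difference (my own sketch): union identifies
shared elements, whereas a coproduct would tag each copy and keep both.

```python
# Sketch: "X1,X2" as union in the powerset PX, not as a coproduct.
X1 = frozenset({"A", "B"})
X2 = frozenset({"B", "C"})

union = X1 | X2  # the shared sentence "B" occurs only once

# A coproduct (disjoint union) would tag each copy instead,
# keeping the two occurrences of "B" distinct:
coproduct = {(0, s) for s in X1} | {(1, s) for s in X2}
```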

Provision of categorical counterparts enables us to look at logic in
various ways. Some algebraic features of logical constructions can be
situated in the underlying category.

---

Some further detail:

If we use T for the term functor and S for the sentence functor, and X
is an object of variables, then we usually have something like (SoT)X
for the object of all sentences over all terms over all variables. As I
noted previously, when we add types, we have variables of different
types, etc., so the constructions can look quite complicated in the case
of T and S. Note also how S as the product of identity functors, id x
id, gives an equational theory for terms, and S = id is propositional
calculus, where the logical connectives are operators making p AND q
etc. into terms.
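A small sketch of the S = id x id case (my own illustration): a sentence
over TX is then simply a pair of terms over X, i.e., an equation.

```python
# Sketch: with S = id x id, a sentence over terms is a pair of terms,
# i.e. an equation; (S o T)X is then the object of equations between
# terms over variables X.
from dataclasses import dataclass


@dataclass(frozen=True)
class Var:
    name: str


@dataclass(frozen=True)
class Op:
    op: str
    args: tuple


@dataclass(frozen=True)
class Eq:
    # S = id x id: a pair of terms, not itself a term
    lhs: object
    rhs: object


# An element of (S o T)X, for X = {x, y}: the equation x + y = y + x.
e = Eq(Op("plus", (Var("x"), Var("y"))),
       Op("plus", (Var("y"), Var("x"))))
```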

So, a term is typed, but a sentence is not, except in the special case
of S = id.

In equational logic, if we have the sentence (equation) "t1 = t2", the
terms t1 and t2 must have the same type, but the equation "t1 = t2"
does not have a type. So if we have an AND, as in "t11 = t12 AND t21 =
t22", we must not confuse it with a boolean operator.
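To make that concrete (a sketch of my own, with made-up names): checking
an equation assigns types to its two sides, but the verdict for the
equation itself is only a boolean well-formedness judgement, not a type.

```python
# Sketch (names are mine): terms carry types, equations do not. An
# equation t1 = t2 is well-formed when both sides get the same type,
# but the equation itself receives no type of its own.

def type_of(term, var_types, op_types):
    """Terms: a variable is a string; a compound term is (op, args)."""
    if isinstance(term, str):
        return var_types[term]          # variables are typed by context
    op, args = term
    arg_types, result_type = op_types[op]
    actual = tuple(type_of(a, var_types, op_types) for a in args)
    if actual != tuple(arg_types):
        raise TypeError(f"ill-typed term: {term!r}")
    return result_type


def well_formed(eq, var_types, op_types):
    t1, t2 = eq
    # Returns a boolean, not a type: the equation is not typed.
    return (type_of(t1, var_types, op_types)
            == type_of(t2, var_types, op_types))
```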

A sequence of equations

t11 = t12, ..., tn1 = tn2

implicitly involves an AND, but again we have to be careful not to
confuse it with a term-level connective.

This sequence/set of equations appears nicely in a co-equalizer
diagram, e.g. when we deal with unifiers (a la Rydeheard-Burstall).
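For concreteness, here is a standard first-order unification sketch (my
own illustration, with the occurs check omitted for brevity); the
categorical description via coequalizers is what Rydeheard-Burstall
provide.

```python
# Sketch: syntactic unification of first-order terms.
# Terms: a variable is a string; a compound term is (op, args).

def subst(t, s):
    """Apply substitution s (dict: variable -> term) to term t."""
    if isinstance(t, str):
        return subst(s[t], s) if t in s else t
    op, args = t
    return (op, tuple(subst(a, s) for a in args))


def unify(t1, t2, s=None):
    """Return a unifying substitution for t1 and t2, or None on clash.
    NOTE: the occurs check is omitted for brevity."""
    s = dict(s or {})
    t1, t2 = subst(t1, s), subst(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        s[t1] = t2
        return s
    if isinstance(t2, str):
        s[t2] = t1
        return s
    (op1, a1), (op2, a2) = t1, t2
    if op1 != op2 or len(a1) != len(a2):
        return None                    # clash: no unifier exists
    for x, y in zip(a1, a2):
        s = unify(x, y, s)
        if s is None:
            return None
    return s
```

For instance, unifying f(x, c) with f(c, y) yields the substitution
{x -> c, y -> c}, under which both terms become f(c, c).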

---

More subtle than "," for combining sets of sentences is the silent
operator in "Z |= C      C, W |= D" in proof rules. The space between
"Z |= C" and "C, W |= D" hides an operation that makes the
juxtaposition "Z |= C      C, W |= D" "non-typely typed" in the same
way as "Z |= C" and "C, W |= D" are, respectively. That silent operator
also has properties, and it can be described categorically. Indeed,
proof rules are themselves categorical objects and morphisms!
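One way to make the silent operator concrete (my own sketch, not the
post's formal construction): treat it as a partial binary operation on
sequents, which is exactly what a cut-style rule performs.

```python
# Sketch: the silent juxtaposition in "Z |= C      C, W |= D" modelled
# as a partial operation on sequents, each a (premise-set, conclusion)
# pair. It is defined only when the conclusion of the first sequent
# occurs among the premises of the second.
def combine(seq1, seq2):
    (Z, C), (ctx, D) = seq1, seq2
    if C not in ctx:
        raise ValueError("conclusion of seq1 must occur in seq2's premises")
    W = ctx - {C}
    return (Z | W, D)   # the sequent Z, W |= D


# Example: from Z |= C and C, W |= D we obtain Z, W |= D.
result = combine((frozenset({"Z"}), "C"),
                 (frozenset({"C", "W"}), "D"))
```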

---

Best,

Patrik

On 2022-05-28 20:35, Tennant, Neil wrote:

> Let X be the set of axioms of any first-order mathematical theory.
>
> The lemmas, theorems, and corollaries in the theory all rest,
> ultimately, on members of X.
>
> If you are morally certain that X is consistent (equivalently: has a
> model), then your proofs of those lemmas, theorems, and corollaries
> make you certain of their truth in any model of X.
>
> I take it that this much is an agreed starting point among logicians
> and foundationalists.
>
> Now: the classification of proven results from X as lemmas, theorems,
> and corollaries is really a subjective matter. From a logical point of
> view, they are 'on a par'. They are all of them just proven
> consequences of X.
>
> I want to use the word "theorem", though, to describe any "terminus" of
> deductive reasoning from X. Call such a terminus T. I shall be looking
> at proofs of results ("sequents") of the form Y:T, where Y is a finite
> subset of X. Note that these are all single-conclusion sequents, even
> in the classical case.
>
> 'On the way' to T from Y there could be many an intermediate result.
> Let us call such results lemmas. Consider a particular lemma L within
> an overall proof of Y:T.
> It would feature thus:
>
> There would be a proof of a sequent Y1:L (Y1 a subset of X) and
> another proof of a sequent L,Y2:T (Y2 a subset of X).
> Together they would convince the mathematician that T has been
> established from Y1,Y2. That is, s/he would take the argument
> (or sequent) Y1,Y2:T to have been established.
>
> Suppose you are rigorously formalizing the proofs of all known results
> in any mathematical theory with axiom-set X. You are focusing on some
> particular theorem T. You would identify all the lemmas involved in
> proofs 'on the way' to T from axioms in X. This would result in a
> decomposition of the informally rigorous reasoning into 'internally
> lemma-free' chunks, establishing sequents of the form (for Y a finite
> subset of X)
>
> L1,...,Ln,Y:L or (finally) L1,...,Ln,Y:T
>
> By soundness of proof, every one of these sequents is truth-preserving.
> So one can say, respectively,
>
> X |= L and X |= T
>
> as appropriate. The double turnstile |= of logical consequence does
> actually satisfy cut:
>
> Z |= C      C, W |= D
> ________________
> Z,W |= D
>
> But there is no need to apply the Rule of Cut within any sequent proof!
>
> The aforementioned identification of lemmas can be very thorough.
> Mathematicians might not call a particular intermediate result a lemma.
> But we logicians can---if we are so minded when formalizing---do that
> on their behalves, in the interests of complete formalization.
>
> Because proofs are finite, there will be an end to this 'lemma
> interpolation' process. It can be carried out to the point where---in
> the context
>
> Y1:L and L,Y2:T
>
> ---the lemma L is not both the conclusion of an introduction (or step
> of Classical Reductio, or of Dilemma) in the proof of the sequent Y1:L
> and the major premise of an elimination in the proof of the sequent
> L,Y2:T.
>
> This means that all the proofs of the sequents thus identified will be
> in normal form (if formalized as natural deductions) or cut-free (if
> formalized as sequent proofs).
>
> The process of fully formal, rigorous codification of the proofs the
> sequents one has identified can now proceed in (Classical) Core Logic.
>
> There will be no exponential explosion. For no cuts-within-proofs will
> be performed, and therefore no cut-elimination will need to take place.
>
> Moreover, the core proofs involved will actually be shorter than the
> normal proofs in Gentzen-style natural deduction, or, respectively,
> shorter than the cut-free proofs in the sequent calculus. This is
> because the Gentzenians will be insinuating unnecessary steps of EFQ in
> the former, and unnecessary steps of Thinning on the Right in the
> latter. The Core logician, by contrast, can simply cut to the chase
> (please excuse the pun) by applying the Core rules, which do not need
> those "toppings up" supplied by EFQ (resp., by Thinning on the Right).
>
> Once again, I wish to emphasize: Classical Core Logic is adequate for
> the fully rigorous formalization of the classical mathematical
> reasoning that you actually find in the mathematical literature.
> Neil
>
> Neil Tennant
> Arts & Sciences Distinguished Professor in Philosophy
> Distinguished University Scholar
> College of Arts & Humanities Department of Philosophy
> 322 University Hall, 230 North Oval Mall, Columbus, OH 43210
> 614-292-7914 Office
>
> tennant.9 at osu.edu u.osu.edu/tennant.9/
>
> To appear in early 2022:
> _The Logic of Number_
>
> Land Acknowledgment: The Ohio State University occupies the
> ancestral and contemporary lands of the Shawnee, Potawatomi, Delaware,
> Miami, Peoria, Seneca, Wyandot, Ojibwe, and Cherokee peoples. The
> university resides on land ceded in the 1795 Treaty of Greeneville and
> the forced removal of tribal nations through the Indian Removal Act of
> 1830.