[FOM] Use of Ex Falso Quodlibet (EFQ) (or ECQ)--reply to Michael Finney

Michael Lee Finney michael.finney at metachaos.net
Tue Sep 8 12:12:08 EDT 2015


Neil,

Thank you. My search did NOT turn up that web page (searching for
"Core Logic" turns up a tremendous amount of junk that is not useful).
The only page I did find had just a couple of the papers. I have
downloaded the papers you suggested (and a few more). After I read
them, I can reply more appropriately.

Otherwise, please note that my objections to natural deduction are NOT
to the use of sequents (at least those with a single formula on the
right). A sequent is essentially the same thing as a rule. Many
substructural logics require not only rules but meta-rules. Brady uses
both, although he treats the meta-rules as extra-logical rather than as
a fundamental part of his systems directly covered by the proof rules.
Even some axiomatizations of classical logic use rules. A Hilbert-style
proof theory is not restricted to simple formulas.
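
To illustrate (my own example, not Brady's notation): the sequent
A, A -> B |- B expresses the same inference as the rule modus ponens.
A meta-rule, by contrast, is a rule about the derivability of sequents
rather than a rule applied within a derivation; the deduction theorem,
"if Gamma, A |- B is derivable then so is Gamma |- A -> B", is the
standard example.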

My objection is to the horrific notation that spreads even a small
proof all over the page. What is wrong with a nice, linear format
(which, of course, contains an embedded tree)? A few extra points here
and there to delineate parts of the proof can certainly help. But, it
should also be noted that a notation convenient for proof theory is
not necessarily the best notation for actually using a logical system in a
formal manner. When a practicing mathematician needs to add a bit of
formality to a proof, do they use natural deduction? Not that I've
seen. They show a step or two using a linear format. It is like the
difference between programming in COBOL and programming in C. For the
most part, shooting your foot off is preferable to programming in COBOL
(not that C is the last word in programming languages). But sometimes
even C won't do and you need assembler.
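
For what it's worth, here is the kind of linear format I have in mind.
It is just the textbook Hilbert-style derivation of A -> A from the K
and S axiom schemes with modus ponens, not a proof in any of the
systems under discussion:

  1. A -> ((A -> A) -> A)                                      [K]
  2. (A -> ((A -> A) -> A)) -> ((A -> (A -> A)) -> (A -> A))   [S]
  3. (A -> (A -> A)) -> (A -> A)                               [MP 1,2]
  4. A -> (A -> A)                                             [K]
  5. A -> A                                                    [MP 3,4]

Each line cites the axiom or the earlier lines it depends on, so the
underlying tree is still recoverable from the citations.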

I have a similar notational objection to the common presentation of
Category Theory. Why, all of a sudden, do we need diagrams in our
proofs? Where is that essential, or even desirable? How does it
help? It mostly appears to make what should be a simple subject more
difficult. It is similar to the difference between the notation that
Hughes & Cresswell used for modal logic and the cleaner syntax that is
more commonly used now. At least Hughes & Cresswell had the excuse of
being limited to typewriters. Icons on GUI desktops are similar. If you
have two or three, that might be OK. My desktop has over 200 and they are
virtually indistinguishable. What is wrong with words (which are there
anyway)? The icons just waste my screen real estate. Just because
someone came up with a huge, page-wasting notation that you can't even
type doesn't mean it should be used. Calling it "natural" doesn't make
it so.

If a proof system is rigorous then it can be automated; that is
trivial and not especially interesting. Strategy in proof search can
be adapted to any formalization. Any proof system can be used in proof
assistants, proof verifiers, etc. Dealing with quantification,
especially second order, is much more difficult than adapting the
internal representation to one proof system or another.
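
As a toy illustration of why I say the checking side is trivial (this
is my own sketch, assuming a bare propositional language with only
implication, literal axiom instances and modus ponens, not any of the
systems discussed here), a checker for linear Hilbert-style proofs
fits in a few dozen lines of Python:

# A formula is an atom (a string) or a pair ('->', antecedent, consequent).
def imp(a, b):
    return ('->', a, b)

def check_proof(lines, axiom_instances):
    """Check a Hilbert-style linear proof.
    lines: list of (formula, justification); a justification is 'axiom'
    or ('MP', i, j), citing earlier 1-based lines, where line j must be
    (line i -> this formula).
    axiom_instances: set of formulas accepted outright (literal
    instances only; real schema matching is omitted in this sketch).
    Returns (True, None) on success or (False, bad_line_number)."""
    proved = []
    for n, (formula, just) in enumerate(lines, start=1):
        if just == 'axiom':
            ok = formula in axiom_instances
        else:
            tag, i, j = just
            ok = (tag == 'MP' and 1 <= i < n and 1 <= j < n
                  and proved[j - 1] == imp(proved[i - 1], formula))
        if not ok:
            return False, n
        proved.append(formula)
    return True, None

# The five-line derivation of A -> A above, fed to the checker.
A = 'A'
l1 = imp(A, imp(imp(A, A), A))                     # K instance
l2 = imp(l1, imp(imp(A, imp(A, A)), imp(A, A)))    # S instance
l3 = imp(imp(A, imp(A, A)), imp(A, A))             # from 1, 2 by MP
l4 = imp(A, imp(A, A))                             # K instance
l5 = imp(A, A)                                     # from 3, 4 by MP
proof = [(l1, 'axiom'), (l2, 'axiom'), (l3, ('MP', 1, 2)),
         (l4, 'axiom'), (l5, ('MP', 4, 3))]
print(check_proof(proof, {l1, l2, l4}))            # (True, None)

The hard parts are exactly what this toy version leaves out: schema
instantiation, quantifiers, and the search strategy needed to find
proofs rather than merely check them.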

I can't disagree about the Sheffer stroke or the complexities
inherent in Principia Mathematica. But formalizing classical logic in
one (or two or three) axioms isn't really of interest in substructural
logics. We want to pick things apart to understand them better. Russell
& Whitehead's formalizations were very awkward in general and predated
cleaner approaches.

Also, needing 50 lines to prove something isn't the fault of Hilbert
proof rules. For example, one system I have does not have any
distribution rules as axioms. That is because they can be derived. But
that derivation is not simple and needs over 40 proof steps. That is
because the properties of every operator are broken down so that they can
be investigated independently. That means that something that needs
only a few lines of proof in a less detailed system requires much more
because the larger properties have to be built up from the fragments.

If you are more interested in what you can prove about a logic, then
perhaps proof theory is a driving factor and the big picture is more
important. But, if you want to build something that actually works and
can replace the conventional foundations of mathematics, you need to
see what happens in detail so you can see how to modify the design of
the logic. I have created and discarded dozens of different logics
(most of them relevant) in that pursuit. I don't analyze each one to
death. I examine it enough to see how it works and doesn't work in
practice. Where will it fail in proving basic set theory theorems? To
me, proof theory is secondary to usefulness. Once you have something
that works, then it can be analyzed from a proof theoretic basis.
There is no end to that. The best way to learn about a logic isn't to
prove things ABOUT it, it is to prove things WITH it. Get your hands
"dirty" and see where the rough edges are.


Michael Lee Finney


>   
> "I have been looking on the web have only found a few papers on the 
> "Core Logic" or "Core Classical Logic". Worse, what I have found are 
> entirely based on natural deduction and none of them define the 
> complete systems or their justifications." 

>   
> Have you looked at any of the papers that can be downloaded from  
> http://u.osu.edu/tennant.9/publications/ ? 
> Articles 6, 7, 10 and 15 define Core Logic. 
> The rules need no justification; they are self-justifying. 

>   
> "Can anyone point me to a basic paper (or web presentation) using 
> normal Hilbert axiomatizations of "Core Logic" and "Core Classical 
> Logic" (hopefully with justifications for the chosen axioms)?" 
>
> Neil Tennant 


