[FOM] Metamath Solitaire
Mario Carneiro
di.gama at gmail.com
Thu Sep 10 17:34:29 EDT 2015
On Thu, Sep 10, 2015 at 12:10 PM, Tennant, Neil <tennant.9 at osu.edu> wrote:
> My own suggestion is that *fully automated* deduction is better off
> proceeding within a core system. If you are a constructivist, use a
> proof-finder for Core Logic; if you are a classicist, use a proof-finder
> for Classical Core Logic. At the propositional level, the proof-finder for
> Core Logic suffices for both tasks, because in order to prove a classical
> result X:A, it suffices to prove X,~A:# in Core Logic, and then perform a
> terminal step of Classical Reductio. That, however, might not be optimally
> efficient, since at the propositional level Classical Core Logic (like
> Classical Logic) is (co-)NP-complete, whereas the decision problem for Core
> (like that for Intuitionistic Logic) is PSPACE-complete. So the trick of
> merely terminal classical steps comes at the price of some increase in
> complexity.
>
If your interest in Core Logic lies in its usefulness for fully automated
deduction, by pruning irrelevancies from the search space, you may also be
interested in the algorithm used in Metamath Solitaire (a Java applet
available at http://us.metamath.org/mmsolitaire/mms.html). Roughly
speaking, one is only allowed to take the "most general" substitution
instance of a given rule. For example, starting from a theorem A and
applying right or-introduction, we can conclude A \/ B but not A \/ (B /\
C). These steps are later "edited" during the unification algorithm if a
more specific substitution becomes necessary. For example, consider this
proof of the law of identity (note that Metamath Solitaire, unlike Core, is
a Hilbert system):
1. A1: A->(B->A)
2. A1: A->((B->A)->A)
3. A2: (A->((B->A)->A))->((A->(B->A))->(A->A))
4. MP(2,3): (A->(B->A))->(A->A)
5. MP(1,4): A->A
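
To fix notation, here is a tiny Python sketch (my own illustration, not
the applet's code; Metamath Solitaire itself is a Java applet) that writes
the five formulas as nested tuples and checks that both MP steps go
through:

def imp(p, q):
    """Build the formula p -> q as a nested tuple."""
    return ('->', p, q)

A, B = 'A', 'B'

# The five lines of the proof above, with the substitutions written out.
line1 = imp(A, imp(B, A))                  # A1
line2 = imp(A, imp(imp(B, A), A))          # A1, with its B instantiated to B->A
line3 = imp(line2, imp(line1, imp(A, A)))  # A2
line4 = imp(line1, imp(A, A))              # MP(2,3)
line5 = imp(A, A)                          # MP(1,4)

def mp(minor, major):
    """From minor and major = (minor -> conclusion), return the conclusion."""
    op, antecedent, consequent = major
    assert op == '->' and antecedent == minor
    return consequent

assert mp(line2, line3) == line4
assert mp(line1, line4) == line5
print(line5)                               # ('->', 'A', 'A')
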
The only thing needed to construct this proof is the sequence of rule
applications A1, A1, A2, MP, MP; all the substitutions are discovered
automatically. After the first three steps, one would have:
1. A1: A->(B->A)
2. A1: C->(D->C)
3. A2: (E->(F->G))->((E->F)->(E->G))
because none of the substitutions are yet related to each other. At step 4,
the application of MP demands that C->(D->C) and E->(F->G) be the same, so
E=C, F=D, G=C, and the result is
1. A1: A->(B->A)
2. A1: C->(D->C)
3. A2: (C->(D->C))->((C->D)->(C->C))
4. MP(2,3): (C->D)->(C->C)
In the last step, unification gives C=A and D=(B->A), yielding the original
proof above.
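
To make the delayed-substitution game itself concrete, here is a rough
sketch in the same style (again my own, with my own names for everything,
and with the occurs check omitted): each use of A1 or A2 introduces fresh
metavariables, and each MP step unifies the minor premise with the
antecedent of the major premise, specializing earlier lines only as far as
required.

import itertools

_ids = itertools.count(1)

def fresh():
    """A brand-new metavariable, represented as ('?', n)."""
    return ('?', next(_ids))

def imp(p, q):
    return ('->', p, q)

def instantiate(schema):
    """Most general instance: every schema letter becomes a fresh metavariable."""
    mapping = {}
    def walk(t):
        if isinstance(t, str):
            if t not in mapping:
                mapping[t] = fresh()
            return mapping[t]
        return (t[0],) + tuple(walk(x) for x in t[1:])
    return walk(schema)

def subst(t, s):
    """Apply the substitution s (metavariable -> term) to t."""
    while t in s:
        t = s[t]
    if isinstance(t, tuple) and t[0] == '->':
        return ('->', subst(t[1], s), subst(t[2], s))
    return t

def unify(a, b, s):
    """Extend s so that a and b become equal (occurs check omitted in this sketch)."""
    a, b = subst(a, s), subst(b, s)
    if a == b:
        return s
    if isinstance(a, tuple) and a[0] == '?':
        return {**s, a: b}
    if isinstance(b, tuple) and b[0] == '?':
        return {**s, b: a}
    if a[0] == b[0] == '->':
        return unify(a[2], b[2], unify(a[1], b[1], s))
    raise ValueError('formulas clash: no unifier')

A1 = imp('A', imp('B', 'A'))
A2 = imp(imp('A', imp('B', 'C')), imp(imp('A', 'B'), imp('A', 'C')))

# The whole proof is just the move sequence A1, A1, A2, MP, MP.
s = {}
line1 = instantiate(A1)
line2 = instantiate(A1)
line3 = instantiate(A2)
s = unify(line2, line3[1], s)   # MP(2,3): line 2 must match line 3's antecedent
line4 = subst(line3[2], s)
s = unify(line1, line4[1], s)   # MP(1,4): line 1 must match line 4's antecedent
line5 = subst(line4[2], s)
print(line5)                    # a metavariable implying itself, i.e. A -> A

Running this prints an implication from one metavariable to itself, i.e.
the law of identity, and the only inputs were the rule names A1, A1, A2,
MP, MP, just as in the five-step proof above.
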
The practice of waiting until the last minute to decide the substitutions
means that proof search becomes a game with finitely many moves, and with
finitely many proofs of any fixed length. And although I haven't heard it
described in these terms, one can also see a relevance property here, since
one has A, ~A |- B where "B" is a literal substitution variable operating
much like the # of Core (one cannot prove A, ~A |- B /\ C directly, although
it may appear as a subproof of a longer proof).
For more on this approach, see:
N. Megill and M. Bunder, Weaker D-Complete Logics, J. IGPL 4 (1996),
pp. 215-225. http://us.metamath.org/downloads/weakd.pdf
Mario