[FOM] The unbearable ghastliness of EFQ, and sundry other matters arising from Harvey's last post
Harvey Friedman
hmflogic at gmail.com
Fri Sep 4 01:27:50 EDT 2015
Let me try to respond to Chow (reproduced below), at least indirectly.
I will wear the hat of a mathematician here. There are many tactics
that we learn to use in order to come up, rather quickly, with
write-ups of proofs, and even pre-write-ups to some extent, that are
viewed as totally reliable, and that very efficiently get something
totally convincing down on paper.
One of the many useful tactics like this is the following. In general,
a proof is a complex endeavor: in the middle of the proof you can be
in a variety of contexts, which generally have to do with what you are
assuming and what you are proving or trying to prove. What you are
assuming and what you are trying to prove vary in complicated ways as
the proof unravels, but the overall target has a fixed set of
assumptions and a fixed conclusion.
In the course of this proof, no matter how the contexts vary, you are
allowed to make certain moves.
Among these moves is the following: whatever it is that you are trying
to prove in that context within the proof, you consider it
automatically done if you can get a contradiction within that context.
That may well be the easiest, quickest, and least mind-numbing way to
achieve the goal for the context you are in. Sometimes this is really
easy. Other times it is not, or it is even impossible to get a
contradiction, period. So if you see an easy contradiction in a
particular context, write it down immediately, and declare that you
have established what you had set out as the goal for that particular
context.
I use this strategy all the time, particularly in complicated detailed
arguments subject to errors. Most typically, when I start to get
confused, I break whatever context I am in into a relatively small
number of mutually exclusive cases, in order to compartmentalize my
work. I will know in advance, or deem it highly likely, that many of
these cases are going to be trivial, and that some, or at least one,
is going to require some thinking. This breaking into cases is really
a breaking of the original context into subcontexts, which can be
treated independently of each other, as long as I know that I am
staying in my original context, and I don't screw up and forget just
what is going on in that context.
I typically write, e.g.:
want to prove A.
case 1. xxx. normal stuff ending with A.
case 2. xxx. normal stuff ending with A.
case 3. xxx. This contradicts the choice of u. Done.
In case 3, I am happy to be relieved of even having to remember what A
is! Yes, I can reassemble the proof to avoid having to do this.
But that is not something that I want to have to do, and in fact it is
harder to write that down and convince everybody that I am correct
than simply doing the above. In fact, what happens in practice is that
one does not know, going in, which cases are going to be handled this
way, and which cases are not going to be handled this way. You are
comforted by the fact that you have exhausted the cases in some
natural way, and will probably emerge from the experience rather
efficiently without having to worry about correctness or expositional
problems.
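The case-3 move, where a contradiction closes the goal without the
goal A ever being restated, is exactly what proof assistants already
permit. Here is a minimal sketch in Lean 4 (the names u, hu, hpos and
the particular hypotheses are hypothetical, chosen only to mirror the
shape of the example above):

```lean
-- Want to prove A. Break the context into exhaustive cases on u.
example (A : Prop) (u : Nat) (hu : u ≠ 0)
    (hcases : u = 0 ∨ 0 < u) (hpos : 0 < u → A) : A := by
  cases hcases with
  | inl h => exact absurd h hu  -- case 1: contradicts the choice of u. Done.
  | inr h => exact hpos h       -- case 2: normal stuff ending with A.
```

Note that in the first branch, `absurd` discharges the goal A from the
contradiction alone; nothing about A itself is used or remembered.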
The same ideas apply to cut, EVEN WHEN THERE IS NO BLOWUP. You may
assemble a proof with a lot of what I call HAVE/WANTS. Like
assumptions/conclusions. And while immersing yourself in all of these
complex HAVE/WANTs that are properly related to each other, you may
spot one HAVE/WANT and another HAVE/WANT with the right kind of
overlapping assertions, so that you make a CUT. This is done all the
time. It is
often the easiest way to get a convincing proof down on paper. I don't
want to even consider a system where I am required to unravel such
cuts, even if there is no blowup involved.
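In a proof assistant the HAVE/WANT discipline is nearly literal: in
Lean 4, the `have` tactic introduces a cut, proving an intermediate
assertion once, mid-proof, and using it thereafter, with no obligation
to unravel it. A minimal sketch with schematic propositions:

```lean
example (A B C : Prop) (hab : A → B) (hbc : B → C) (ha : A) : C := by
  have hb : B := hab ha  -- the CUT: a HAVE that overlaps a later WANT
  exact hbc hb           -- the original WANT, discharged through the cut
```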
Actually, I am not sure that I have even seen any super friendly
formalization of mathematical reasoning. Over the years, I planned to
create my HAVE/WANT system. Yes, of course, there would be a huge
overlap with ideas that are already around, including the setups used
by the Proof Assistants. But small changes in terminology and format
could make it hugely better and more usable than existing setups. Of
course, "selling" such a super friendly system would have to be left
to my successors.
At this point, I don't see any interesting restriction on the use of
negation in real mathematical proofs other than these two:
a. Intuitionism. Here you can't just assume the negation of something
and get much out of that. Most commonly, you can't just assume "not
for all" and get "there exists". There are other things like this. You
can't just assume not A, get a contradiction, and then conclude A. Or
assume not (A and B) and then conclude not A or not B. So intuitionism
really affects the way negation interacts with just about everything
else.
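The division in (a) is visible in Lean 4 in which proofs go through
without classical axioms. A sketch with a schematic proposition
(`Classical.byContradiction` marks the step that is not available
intuitionistically):

```lean
-- Constructive direction: from A, infer not-not-A, with no axioms.
example (A : Prop) (h : A) : ¬¬A := fun hn => hn h

-- Assume not A, get a contradiction, conclude A: this is
-- double-negation elimination, and it needs the classical axiom.
example (A : Prop) (h : ¬¬A) : A := Classical.byContradiction h
```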
b. No Negation. I have a feeling that this is underexplored. There
are now some modern tools that can be brought to bear on this. VERY
VAGUELY, a rough theme seems to be emerging:
And I am not excited about the idea that you can in principle
rearrange proofs to avoid certain particular features, especially when
there seems to be no need to do so, and when doing so goes against
what people actually write down.
HOWEVER, I would be interested in seeing some new restrictions on
logic like a,b that amount to something. I would even be interested in
seeing new restrictions that in fact ARE MET by all or almost all
mathematical proofs. It is possible that that would be an interesting
finding of the form: mathematicians never do this or that, and we know
that what they do is still complete.
EVEN MUCH MORE INTERESTING would be to show that mathematicians
actually work in a (natural, coherent, identifiable) system of logic
that is INCOMPLETE. That could have major implications for all sorts
of issues in f.o.m.
*The negationless theory of the ring of integers is deeply related to
the ordinary theory of the ring of complex numbers.*
Harvey Friedman
On Thu, Sep 3, 2015 at 9:29 PM, Timothy Y. Chow <tchow at alum.mit.edu> wrote:
>
> Harvey Friedman wrote:
>
>> The only proof you have offered of this "empty set contained in A" in
>> ordinary mathematical language read virtually identically to the usual proof
>> with explosion (EFQ), and so I still haven't seen any proof of "empty set
>> contained in A" that isn't obviously based on the idea that a contradiction
>> yields everything. I said that if I see such a thing, I will revisit this
>> whole matter.
>
>
> I haven't followed all the ins and outs of this particular discussion, but
> the debate seems to be turning on whether the following argument---
>
> Let b be arbitrary. Suppose b in {x|~x=x}. Then ~b=b. But b=b.
> Contradiction. So, if b in {x|~x=x}, then b in A. QED
>
> ---employs the idea that "a contradiction yields everything." I don't
> understand what Core Logic is, and I'm not sure I even understand what "a
> contradiction yields everything" means. But I can give a data point as a
> mathematician introspecting on what he thinks he is doing when reasoning as
> above. I think I am making an assumption, arriving at a contradiction, and
> then concluding that my assumption must be mistaken. I am not, or at least
> am not consciously, thinking about "a contradiction yields everything."
> Where is that step supposedly coming in? I guess this is a question for
> Harvey. Am I supposedly arriving at the contradiction, then making a
> further step that the contradiction yields everything, and then balking at
> accepting everything, and only then backtracking to my assumption to reject
> it? It's not possible to balk at the contradiction itself and backtrack
> immediately? We have to justify balking at the contradiction by appealing
> to its undesirable consequences, or something?
>
> Tim