Technology for Which Mathematical Activities?/1
Buzzard, Kevin M
k.buzzard at imperial.ac.uk
Sun Mar 7 11:38:54 EST 2021
Sorry for the delay in getting around to responding to all of this.
Response to Friedman's quoted post:
> WE ALL KNOW THAT technology help for doing mathematics is around. The
> real question is: for what kinds of things? Making completely
> unsupportable claims of prospects that are totally unwarranted,
> instead of FOCUSING ON the kinds of things that are realistic, is
> rather counterproductive with the general mathematical community.
My line of thinking is: technology help for doing mathematics is in
theory available, but in practice is totally ignored by the vast majority of
the mathematical community, despite Voevodsky, a Fields Medallist,
advertising it for 15 years at the IAS, before sadly passing away in 2017.
My concern is that the area is growing far too slowly. I think that there
is a *strong possibility* that computers will be able to compete with humans
when it comes to proofs *one day in the future* and I think that a statement
as vague as this is very appealing. The problem is that, with prevailing
attitudes, this day is never going to come. So yes, let us focus
on things which are realistic, but we do need to remember the ultimate
goal. To be quite frank, after 25 years of my ultimate goal being "prove
more results in the Langlands philosophy" I find this new goal far far
more exciting. OK so let's focus.
> I suggest we compile and discuss a list of things, generally
> explainable, which have yielded progress and we can expect more
> progress. And say something new maybe about this. Here are some.
> 1. HELP IN formulating conjectures. NOW this is only realistic in
> CERTAIN KINDS OF MATHEMATICS. Interesting to understand which kind it
> is and which kind it is not. If it involves VISUALIZATION,...
Here we are trying to run before we can walk. These systems barely know
undergraduate level mathematics. They cannot even understand what
is happening in e.g. mirror symmetry or the Langlands program. So my
take on this one is that right now the goal is to *teach the computer
the definitions of the objects humans study*. This is a feasible goal.
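To make concrete what "teaching the computer a definition" involves, here is a
toy sketch in Lean 4 / mathlib style (the structure name is mine, purely for
illustration, not mathlib's actual definition):

```lean
import Mathlib

-- Toy illustration: a group homomorphism packaged as a structure,
-- i.e. a function together with a proof that it respects multiplication.
-- (Illustrative only; mathlib's real `MonoidHom` is set up differently.)
structure MyGroupHom (G H : Type) [Group G] [Group H] where
  toFun : G → H
  map_mul : ∀ a b : G, toFun (a * b) = toFun a * toFun b
```

Scaling this kind of thing up from group homomorphisms to the objects of mirror
symmetry or the Langlands program is exactly the feasible but unglamorous work
I have in mind.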
> 2. HOWEVER: help in formulating conceptual breakthrough conjectures is
> another matter. Finding the right conjecture that explains some
> mysterious phenomena, or yields a startling revelation about a field -
> these are crucially important endeavors. For instance, Goedel came up
> with the precise statement that is now known as the COMPLETENESS
> THEOREM FOR PREDICATE CALCULUS. Then proved it. Would he benefit from
> technology to even think of the completeness theorem?
I don't know enough about the history of logic. Here it would seem to me that
the completeness theorem is a very natural question to ask once one
has the definitions in place. So probably the hard part was coming up
with the formal definitions of "being provable in a theory" and "being
true in all models". Definitions are really hard, because it is trivial
to generate thousands of useless ones. So this is a good example of
something which I think that right now is just a dream. But not all
profound conjectures are like this. For example the Birch and
Swinnerton-Dyer conjecture was about objects whose definitions
were already in place. Similarly the Langlands conjectures for GL_n were
postulating a relationship between two objects which already existed,
generalising class field theory. Sometimes powerful conjectures come
after definitions, sometimes at the same time. I would naively imagine
that computers will be better at finding good conjectures than good definitions.
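For the record, once the two definitions are in place the completeness theorem
itself is a one-line statement (here in informal LaTeX notation, for a
first-order theory T and sentence phi):

```latex
% Goedel's completeness theorem for first-order predicate calculus:
% a sentence is provable from T iff it is true in every model of T.
T \vdash \varphi \quad\iff\quad T \models \varphi
```

The hard part was pinning down what the two relations on either side even mean.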
> 3. AND HOWEVER, help in formulating KEY DEFINITIONS. Frege invented
> first order predicate calculus...
Exactly. Not yet realistic.
> 4. Also tell me how computer technology is going to help Solovay and
> Reinhardt lay out the fundamental large cardinal hierarchy based on
> nontrivial elementary embeddings? And tell me how computer technology
> is going to help Cohen discover forcing, or Goedel discover L?
I don't know the Solovay/Reinhardt work. Again, forcing is a technique
and L is a definition. These are hard. Not all breakthrough mathematics
looks like this, though some of it does.
However, humans reason through analogy. We adjoin elements to
fields to get from the reals to the complexes. Why not try and adjoin
things to models of set theory? Computers can be trained to reason like
that. Right now we are getting back to the key problem -- if we want
computers to come up with a bunch of new definitions then we need
to show them the thousands of old definitions we have already come up
with. What I find frustrating about the human approach to all of this is
that the computer scientists do not particularly care for the mathematics
(perhaps some care about logic). Far fewer care about the Birch and
Swinnerton-Dyer conjecture, just one of many conjectures which mathematicians
regard as fundamental (cf. the Millennium Problems) but whose *statement*
has been formalised in no theorem prover. Before we can formalise
the statement there is barely any point talking about whether computers
can help us to prove it -- this is how primitive we are right now. Again
formalising the statement is very much a focus on something which is realistic.
> 5. ESTABLISHING ABSOLUTE RIGOR. That, like 1, is where the technology
> has really made a difference. BUT doing this for a big theorem BEFORE
> it has really been well understood?
This bar is too high. How about doing this for an intermediate lemma whose
proof should be just some follow-your-nose calculation which of course
some of us love to do, but might take half an hour? Right now we have
two possibilities -- either check those details, or just assume it and worry
about it later. The computer offers a third possibility -- see if it can check
the result itself. This is something which I *believe* will become realistic
in the near future, but only in the areas where people invest some time
in teaching it to the computer.
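As a minimal example of this third possibility, here is the kind of
follow-your-nose algebra that Lean's mathlib can already discharge
automatically (working Lean 4 code; bigger calculations need correspondingly
bigger tactic libraries):

```lean
import Mathlib

-- A routine identity a human might check in the margin; the `ring`
-- tactic verifies it from the commutative ring axioms.
example (a b : ℤ) :
    (a + b) ^ 3 = a ^ 3 + 3 * a ^ 2 * b + 3 * a * b ^ 2 + b ^ 3 := by
  ring
```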
A lot of these points above are of the form "human geniuses can do/
have done this -- how would computers have helped?". This bar is too
high. How about "human graduate students are struggling to do this --
how would computers help?". *That* is my current interpretation
of "focusing on the kinds of things that are realistic". We are not
all Friedmans. Harvey, you may not live to see computers good enough
to help you. The same was true of Tigran Petrosian and the other great
chess players of that era. However, chess computers helped *me* to learn
about chess in the 1980s [I was an average club player].
> WHAT ABOUT formally verifying FLT? I have
> no doubt that this will be done by 2100, but WILL INVOLVE SERIOUS
> REWORKING AND SIMPLIFICATIONS of the proof and a lot of serious time
> commitments. And the key ideas in those preliminaries are just not
> going to be using technology.
I do not want to make any predictions of the form "X will be done
within 80 years" because my main prediction here is that I am
completely unable to predict what technology will look like in
80 years. On the other hand I am not at all convinced that the
formalisation of FLT will involve serious reworking and simplifications.
This is not how previous serious formalising efforts have gone.
There is no obstruction to formalising the current state of the Wiles/Taylor
ideas, other than time (or, equivalently, money).
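Indeed the *statement* of FLT is already easy to write down in a prover; a
standard Lean rendering over the naturals, with `sorry` marking the absent
proof (the theorem name here is mine), is something like:

```lean
import Mathlib

-- The statement of Fermat's Last Theorem over ℕ: for n > 2, any
-- solution of x^n + y^n = z^n must have x, y or z equal to zero.
-- `sorry` is the placeholder where the formalised proof would go.
theorem fermatLastTheorem :
    ∀ (x y z n : ℕ), n > 2 → x ^ n + y ^ n = z ^ n → x * y * z = 0 := by
  sorry
```

The statement is a few lines; it is eliminating the `sorry` that costs the
time and money.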
> 6. FINDING ERRORS. There are, as I understand it, some interesting
> cases of this. I would like to HEAR MORE ABOUT THIS. Obviously very
> unusual, but very interesting.
> ESPECIALLY if we can use this to help
> us see SYMPTOMS OF where conventional acceptance of theorems can go
Unfortunately, the answer right now seems to be "referees are sloppy",
which is no surprise.
That might be it for today but I promise I will get to your other posts.