No subject

Jeff Remmel remmel at kleene.ucsd.edu
Thu Oct 23 13:02:13 EDT 1997


I agree with John Baldwin about the foundational aspects of Matijasevic's theorem.
Certainly one way to look at Matijasevic's result is that it is a coding 
result, i.e. one can code any arbitrary r.e. set as the solution set 
of a diophantine equation.  It is a spectacular coding result, but it 
is no different from the coding results which establish many other 
undecidability results, such as the word problem for groups, etc.  The 
foundational import is due to the work of Godel, Church, Post, Turing, 
Kleene, etc., who established a formal definition of computable function 
which seems to capture the intuitive notion of computable function.
If at some point someone were able to come up with an acceptable 
computable procedure which lies outside the current definition of 
computable function, then of course Hilbert's 10th problem would 
have to be revisited.  This is true of all undecidability results, 
many counterexamples in recursive mathematics, etc.  I think that 
the reason Matijasevic's result is important is that it showed 
that undecidability results appear naturally in mathematics.  This 
is very much what Harvey is trying to do with his results, which 
show that statements requiring large cardinals can occur in 
settings that most mathematicians would consider natural.
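
For concreteness, here is one standard way to state the coding 
result in symbols (the letters S, p, and k are just my notation 
for this sketch):

    S \subseteq \mathbb{N} \text{ is r.e.}
      \iff
    \exists k \;\exists p \in \mathbb{Z}[y, x_1, \dots, x_k] \;
    \forall n \, \bigl( n \in S \iff
      \exists x_1, \dots, x_k \in \mathbb{N} \;
      p(n, x_1, \dots, x_k) = 0 \bigr)

Taking S to be any undecidable r.e. set, it follows that no 
algorithm can decide whether an arbitrary diophantine equation has 
a solution in natural numbers, which is the negative answer to 
Hilbert's 10th problem.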

I forget now who first brought up the idea of generally 
acceptable proof methods, but there is still considerable 
controversy about what counts as an acceptable proof, as witnessed 
by the recent debate in the Notices over computer-aided proofs.  
There are some mathematicians who are very uncomfortable 
with proofs that rely on verification of computations or case 
analyses carried out by a computer.  However, such proofs 
are clearly going to become much more common as symbolic 
computation packages such as Maple, Mathematica, etc. 
become research tools used by an increasing number of 
mathematicians in a variety of fields.  Then there is 
the idea of a proof which can be verified by checking a small 
number of random bits, which has emerged from the work on 
interactive proof checkers in computer science.  Do we 
want to accept proofs which have a greater than 1 - 1/10^20 chance 
of being correct?  Is that somehow different from the proof 
of the classification of finite simple groups, which because of its 
length and complexity is extremely difficult to understand 
and check?  
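
As a back-of-the-envelope calculation (the soundness error of 1/2 
per check is only an illustrative assumption, not a claim about any 
particular proof system): if each independent random check rejects 
a false proof with probability at least 1/2, then

    \Pr[\text{a false proof survives } k \text{ checks}] \le 2^{-k},
    \qquad 2^{-67} \approx 6.8 \times 10^{-21} < 10^{-20},

so the 1 - 1/10^20 confidence level is already reached after fewer 
than seventy such checks.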


Jeff Remmel


