[FOM] 186:Grand Unification I

Harvey Friedman friedman at math.ohio-state.edu
Wed Jul 2 10:39:24 EDT 2003


NOTE: My FOM activity will necessarily be slowed down for the Summer 
to make time for some formal writing.

NOTE: The Grand Unification discussed here transcends the focused 
goals of interactive proof checking discussed in the last few 
numbered postings.

############################################

As stated earlier, in the plan I envisage for proof checking, the 
context strongly encourages the user to adopt LEMMA ENRICHMENT, which 
is intended to have the effect that the complexity of any proof is 
sharply minimized. Nontrivial proof ideas normally generate the 
friendly construction of additional enriching Lemmas.

Nevertheless, the user/computer interaction required even under LEMMA 
ENRICHMENT will from time to time get involved, and hence the need 
for a detailed powerful design of this interaction. We have been 
discussing this, together with supporting user interface ideas, in 
several postings on "Ideas in Proof Checking 1-4", at least for some 
simplified warmup languages.

{In the background, I have been talking to users of Isabelle, a 
highly developed interactive proof checker, to see what difficulties 
users run into, and what Isabelle does about them. At some point, it 
would be particularly valuable if a user of Isabelle, or related 
system, could talk about their experiences.}

I have been aware for quite some time of an important component in 
handling trivialities. This is the use of PACKAGES.

The idea here is that powerful special purpose theorem provers are to 
be created that handle trivialities well, in selected contexts that 
are of frequent use, AND WHERE THE USER FINDS THINGS PARTICULARLY 
ONEROUS.

The creation of these powerful special purpose PACKAGES forms an open 
ended project lending itself to deep THEORETICAL work, tempered with 
practical considerations.

Examples of the most OBVIOUS kind of packages would be
i) packages for Presburger arithmetic;
ii) packages for real closed fields;
iii) packages for algebraically closed fields;
iv) packages for special purpose methods in predicate calculus, such 
as resolution;
v) computer algebra systems.

The problem with v) is that there is no appropriately rigorous 
support. An expression may be simplified, but under what exact 
conditions is the simplification valid?
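To make the worry about v) concrete, here is a minimal sketch (my own illustration, not part of the posting) of the side-condition problem: the familiar rewrite x/x -> 1 is valid only under the proviso x != 0, and a rigorous package would have to carry such provisos along with the rewritten term.

```python
# Illustration (hypothetical example): the computer-algebra rewrite
# x/x -> 1 is sound only under the side condition x != 0.

def lhs(x):
    return x / x          # the unsimplified expression

def rhs(x):
    return 1              # the proposed simplification

# On nonzero inputs the rewrite is sound...
assert all(lhs(x) == rhs(x) for x in [-3, -1, 2, 7])

# ...but at x = 0 the left side is undefined, so a rigorous package
# must attach the proviso x != 0 to the simplification.
try:
    lhs(0)
    sound_at_zero = True
except ZeroDivisionError:
    sound_at_zero = False

print(sound_at_zero)
```

The point is not this particular rewrite, but that every simplification step needs an explicit, checkable validity condition.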

Items i) - iii) represent infinite recursive sets. There are 
theoretical results showing that, asymptotically, no efficient 
algorithms exist. However, there may well be algorithms that perform 
well on the SMALL instances that normally come up.
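As a sketch of what a package in the spirit of ii) might do on small instances (my own illustration, using only the linear fragment), here is one step of Fourier-Motzkin elimination: deciding whether some real x satisfies a set of linear constraints a*x <= b by comparing lower and upper bounds, with exact rational arithmetic.

```python
# Hypothetical sketch: eliminate "exists x" from constraints a*x <= b
# over the reals (the linear core of real closed field QE).
from fractions import Fraction

def exists_x(constraints):
    """Decide whether some real x satisfies all constraints a*x <= b,
    given as (a, b) integer pairs, by Fourier-Motzkin elimination."""
    uppers = [Fraction(b, a) for a, b in constraints if a > 0]  # x <= b/a
    lowers = [Fraction(b, a) for a, b in constraints if a < 0]  # x >= b/a
    # Constraints not involving x: 0 <= b must hold outright.
    if any(b < 0 for a, b in constraints if a == 0):
        return False
    # Satisfiable iff every lower bound is below every upper bound.
    return all(lo <= up for lo in lowers for up in uppers)

# exists x: x <= 5 and x >= 3  -- satisfiable
print(exists_x([(1, 5), (-1, -3)]))
# exists x: x <= 2 and x >= 4  -- unsatisfiable
print(exists_x([(1, 2), (-1, -4)]))
```

Full real-closed-field QE is of course far more involved; the sketch only shows the small-instance flavor of such a package.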

Item iv) is quite different - at least on the face of it - from i) - 
iii), because one is not looking at a sentence, but rather at valid 
forms. However, right after the capitalized statement enclosed in 
#'s below, you will see that in our context there is a way of unifying 
these two, so that they are NOT different.

Valid forms and outright sentences of mathematics are to be combined 
into a single package. One should NOT do this with any heavy handed, 
less than mathematically familiar/friendly, use of higher order 
logic. Everything is to stay naturally in mathematically 
familiar/friendly contexts with a set theoretic foundation underlying 
everything.

We are now coming to the critical proper ATTITUDE to PACKAGES that I 
am promoting. That is,

a) there is NO reason to consider any infinite class of statements to 
be analyzed;
b) issues of decidability, in the usual sense of the word, are 
completely irrelevant;
c) one should concentrate on SMALL problem instances.

What does it mean to say that a large finite collection of small 
problems is DECIDABLE? Under the usual definition, it always is 
decidable. The meaningful claim is instead that there is a small 
efficient algorithm that correctly handles them.

In pondering this, and other ideas I have had about PARADIGM SHIFTS 
for mathematical logic, I see a GRAND UNIFICATION.

*****GRAND UNIFICATION*****

I see a grand unifying theme which merges all of the following into 
one giant enterprise.

1. The idea in model theory that one should study structures which 
are WELL BEHAVED, or TAME, rather than ones which exhibit hopeless 
PATHOLOGY, about which very little can be said that relates to 
anything else. The paradigm cases pushed here are those where there is 
QE = QUANTIFIER ELIMINATION, which "tames" certain important 
mathematical structures, showing either outright decidability, or decidability 
relative to conjectures, or at least where the definable relations 
are very manageable. The specific paradigm cases include the ordered 
ring of integers (Presburger arithmetic), the field of real numbers 
(real closed fields), the field of complex numbers (algebraically 
closed fields), the p-adics, the field of real numbers with 
exponentiation, and some others.
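The Presburger paradigm in miniature (my own illustration, not from the posting): an existential quantifier over the integers in a linear equation can be eliminated in favor of a divisibility condition, which is exactly the sense in which the definable relations stay manageable.

```python
# Hypothetical sketch: one-step quantifier elimination in the spirit
# of Presburger arithmetic.
from math import gcd

def exists_solution(a, b, c):
    """Decide  exists x, y in Z : a*x + b*y = c.
    By Bezout, solvable iff gcd(a, b) divides c -- the quantifiers
    are eliminated in favor of a single divisibility test."""
    g = gcd(a, b)
    return c == 0 if g == 0 else c % g == 0

print(exists_solution(6, 10, 8))   # 6*3 + 10*(-1) = 8
print(exists_solution(6, 10, 7))   # gcd(6, 10) = 2 does not divide 7
```

Full Presburger QE handles arbitrary quantifier alternations the same way, trading quantifiers for congruence and order conditions.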

2. Decision procedures generally, including computational aspects, 
implementational issues, and resource bounds, etc.

3. Proof theoretic formalisms, especially for equational reasoning, 
and quantifier reasoning.

4. Set theoretic reasoning of various kinds, including the possible 
use of large cardinals for concrete statements.

5. Potentially every relatively clean, uncluttered situation in the 
whole of mathematics.

So what is this project?

#TO IDENTIFY MATHEMATICALLY RELEVANT FAMILIES OF *SHORT* ASSERTIONS 
IN VARIOUS FUNDAMENTAL MATHEMATICAL SITUATIONS AND DEVELOP EFFICIENT 
ALGORITHMS FOR DETERMINING THEIR VALIDITY.#

I am interpreting "assertion" rather broadly here. It might be 
presented in implicational form:

Let A1,...,An. Then B.

Normally, there are free variables used in A1,...,An,B. These are, of 
course, interpreted universally.

But suppose A1,...,An,B are thought of as formulas in predicate 
calculus? I just handle the relation and function symbols in a 
friendly way as free variables, also. If R is a ternary relation 
symbol, then

R is a ternary relation

is going to be among the A's, with R a free variable - interpreted 
universally within the underlying set theory.
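The treatment of a relation symbol as a universally interpreted free variable can be sketched in the small-instance spirit above (my own illustration; the assertion tested is a standard exercise, not from the posting): check an implicational assertion "Let R be a binary relation with such-and-such properties. Then B." against ALL binary relations on a small domain.

```python
# Hypothetical sketch: R is a free variable ranging over all binary
# relations on a small set -- brute force is feasible only for small
# domains, which is exactly the regime of interest.
from itertools import product

def check(assertion, n):
    """Check 'for all binary relations R on {0,...,n-1}: assertion'."""
    dom = range(n)
    pairs = [(x, y) for x in dom for y in dom]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        if not assertion(R, dom):
            return False
    return True

def symmetric(R, dom):  return all((y, x) in R for (x, y) in R)
def transitive(R, dom): return all((x, z) in R
                                   for (x, y) in R
                                   for (w, z) in R if y == w)
def serial(R, dom):     return all(any((x, y) in R for y in dom) for x in dom)
def reflexive(R, dom):  return all((x, x) in R for x in dom)

# Let R be symmetric, transitive, serial. Then R is reflexive.  (Valid.)
good = lambda R, dom: (not (symmetric(R, dom) and transitive(R, dom)
                            and serial(R, dom))) or reflexive(R, dom)
print(check(good, 2))

# Dropping seriality: the empty relation is a counterexample.
bad = lambda R, dom: (not (symmetric(R, dom)
                           and transitive(R, dom))) or reflexive(R, dom)
print(check(bad, 2))
```

Here the hypotheses A1,...,An appear as the antecedent of the implication, with R interpreted universally, just as described above.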

Some further remarks.

a. The scope of applicability of the algorithms should remain 
flexible. If one knows that one has handled all short assertions of a 
certain kind, that should not stop one from expanding the class to 
short assertions of a more inclusive kind (e.g., less short, or with 
more primitives). Also, if one cannot yet handle all short assertions 
of a certain kind, one can still use what one has in the interactive 
proof checker. An algorithm may sometimes blow up and still be usable, 
since it may not blow up in practice. Or one may wish to strengthen 
the implementation.

b. The model theorists ALSO look to identify mathematically relevant 
families of assertions in various fundamental mathematical situations 
and develop tools to analyze them - especially quantifier 
elimination, and the analysis of definability (in the context). E.g., 
various kinds of minimality, including o-minimality. Decidability is 
usually, but not always, a byproduct.

c. However, the model theorists are looking for CLEAN generality, and 
will not investigate WILD classes of statements, where for classic 
GODELIAN reasons, things are very badly undecidable, and also where 
the definable sets are very pathological.

d. Note that b and c PREVENT model theorists from looking at almost 
ANYTHING! They can only look at some tiny tiny tiny number of 
contexts with this ATTITUDE - compared to the number of INTERESTING 
contexts.

e. The key aspect of this paradigm shift for model theorists - and 
for ALL mathematical logicians for that matter - is to WELCOME THE 
MOST LOGICALLY HORRIBLE OF THE HORRIBLE OF THE HORRIBLE WITH OPEN 
BRAINS (as long as it is mathematically natural), for the following 
reason:

f. EVERYTHING, no matter how logically horrible, is TAME for a while, 
before it turns UGLY. Intensively investigate from the BOTTOM UP. You 
might get SURPRISINGLY far.

g. But WHY? Lots of interesting tameness uncovered, etc. But there is 
an additional major FOUNDATIONAL reason. My OLD additional 
foundational reason was already sufficiently DRAMATIC. But see the 
additionally DRAMATIC and UNIFYING reason m below.

h. I CONJECTURED that WHENEVER you climbed up the ladder in a 
logically horrible context, you would get to a BOUNDARY, where things 
start to get dicey, BEFORE they plunge into UTTER GODELIAN CHAOS. And 
what is the nature of this BOUNDARY? It is conjectured to be LARGE 
CARDINALS.

i. The principal example of this was in BOOLEAN RELATION THEORY. See

Harvey Friedman, Equational Boolean Relation Theory, 
http://www.mathpreprints.com/math/Preprint/show/

A very short statement in equational BRT, involving two functions and 
three sets, is tied up with Mahlo Cardinals.

j. So I proposed a GRAND development of BRT. Look at various of the 
trillions of totally natural BRT contexts (families of multivariate 
functions and associated sets), and CLIMB THE LADDER, until maybe you 
reach LARGE CARDINALS - as I did in the context discussed in that 
reference above.

k. Then I proposed an even GRANDER vision. BRT is just one aspect of 
setting up little languages surrounding mathematical contexts and 
theorems. There one varies the parameter BOOLEAN RELATION.

l. So generally, we identify little languages associated with 
mathematical theorems, based on a PARAMETER to vary. And then look at 
what statements you get when you do the VARIATION OF INTELLECTUAL 
PARAMETERS. The conjecture is that

1) virtually everywhere you do VARIATION OF INTELLECTUAL PARAMETERS, 
it is very productive and very much connected naturally with lots of 
existing mathematics;

2) frequently, you climb the ladder, and then get into a region where 
it is LARGE CARDINALS.

m. Coming back to DRAMA and UNIFICATION and g above. We now have an 
ADDITIONAL reason for this study.

VARIATION OF INTELLECTUAL PARAMETERS and

TO IDENTIFY MATHEMATICALLY RELEVANT FAMILIES OF *SHORT* ASSERTIONS IN 
VARIOUS FUNDAMENTAL MATHEMATICAL SITUATIONS AND DEVELOP EFFICIENT 
ALGORITHMS FOR DETERMINING THEIR VALIDITY

for the purposes of developing PACKAGES for INTERACTIVE PROOF CHECKING,

***MERGE***

into one GRAND UNIFYING subject!!!!!

The next numbered posting #187 discusses some powerful reasons to 
have this GRAND UNIFICATION in place, because of its role in another 
venture whose great importance is obvious to virtually anyone 
anywhere.

*********************************************

I use http://www.mathpreprints.com/math/Preprint/show/ for manuscripts with
proofs. Type Harvey Friedman in the window.
This is the 186th in a series of self contained numbered postings to 
FOM covering a wide range of topics in f.o.m. The list of previous 
numbered postings #1-149 can be found at 
http://www.cs.nyu.edu/pipermail/fom/2003-May/006563.html  in the FOM 
archives, 5/8/03 8:46AM. Previous ones counting from #150 are:

150:Finite obstruction/statistics  8:55AM  6/1/02
151:Finite forms by bounding  4:35AM  6/5/02
152:sin  10:35PM  6/8/02
153:Large cardinals as general algebra  1:21PM  6/17/02
154:Orderings on theories  5:28AM  6/25/02
155:A way out  8/13/02  6:56PM
156:Societies  8/13/02  6:56PM
157:Finite Societies  8/13/02  6:56PM
158:Sentential Reflection  3/31/03  12:17AM
159.Elemental Sentential Reflection  3/31/03  12:17AM
160.Similar Subclasses  3/31/03  12:17AM
161:Restrictions and Extensions  3/31/03  12:18AM
162:Two Quantifier Blocks  3/31/03  12:28PM
163:Ouch!  4/20/03  3:08AM
164:Foundations with (almost) no axioms, 4/22/0  5:31PM
165:Incompleteness Reformulated  4/29/03  1:42PM
166:Clean Godel Incompleteness  5/6/03  11:06AM
167:Incompleteness Reformulated/More  5/6/03  11:57AM
168:Incompleteness Reformulated/Again 5/8/03  12:30PM
169:New PA Independence  5:11PM  8:35PM
170:New Borel Independence  5/18/03  11:53PM
171:Coordinate Free Borel Statements  5/22/03  2:27PM
172:Ordered Fields/Countable DST/PD/Large Cardinals  5/34/03  1:55AM
173:Borel/DST/PD  5/25/03  2:11AM
174:Directly Honest Second Incompleteness  6/3/03  1:39PM
175:Maximal Principle/Hilbert's Program  6/8/03  11:59PM
176:Count Arithmetic  6/10/03  8:54AM
177:Strict Reverse Mathematics 1  6/10/03  8:27PM
178:Diophantine Shift Sequences  6/14/03  6:34PM
179:Polynomial Shift Sequences/Correction  6/15/03  2:24PM
180:Provable Functions of PA  6/16/03  12:42AM
181:Strict Reverse Mathematics 2:06/19/03  2:06AM
182:Ideas in Proof Checking 1  6/21/03 10:50PM
183:Ideas in Proof Checking 2  6/22/03  5:48PM
184:Ideas in Proof Checking 3  6/23/03  5:58PM
185:Ideas in Proof Checking 4  6/25/03  3:25AM

Harvey Friedman

