FOM: probabilistic rigor
Harvey Friedman
friedman at math.ohio-state.edu
Sat Oct 17 06:30:37 EDT 1998
I posted some initial Friedman/Odlyzko correspondence about probabilistic
proofs at 6:31AM on 10/0/98. Since then, we have continued the
correspondence, and have focused on a point of disagreement that will be
of interest to the fom.
First of all, I was able to give some clear and reasonably interesting
examples of mathematical findings that have probabilistic proofs but for
which no standardly rigorous proof is in sight.
My examples are of the following character. Let A(n) be a predicate on
natural numbers. One carefully chooses a large natural number t and tries
to estimate the cardinality of {n <= t: A(n)}. We require that A(n) be
very efficiently testable.
DIGRESSION. The efficiency of primality testing in most contexts, including
this one, is a red herring. This is because there are **deterministic**
primality tests which a) run quickly in practice; and/or b) can be proved
to run quickly with high probability. These tests provably never make an
error. It's just that once in a blue moon they might take a long time to
answer. Of course, this (probably) has never happened in the history of all
known universes - no blue moons.
Using elementary statistics, one runs a suitable number of trials - i.e.,
tests the truth or falsity of A at randomly chosen arguments - and from
the resulting data decides on an estimate, with a certain confidence
level.
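To fix ideas, here is a minimal sketch of such a trial in Python. The
choice of predicate (A(n) = "n^2 + 1 is prime"), the sample size, and the
confidence parameter z are my illustrative choices, not canonical ones;
the primality test is the standard deterministic Miller-Rabin variant
whose fixed witness set is known to be correct for all m < 3.3 * 10^24,
so t up to about 10^12 is safe here.

import math
import random

def is_prime(m):
    # Deterministic Miller-Rabin: this fixed witness set is known to
    # give correct answers for all m < 3.3 * 10**24.
    if m < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if m % p == 0:
            return m == p
    d, s = m - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in witnesses:
        x = pow(a, d, m)
        if x == 1 or x == m - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False
    return True

def A(n):
    # The efficiently testable predicate: n^2 + 1 is prime.
    return is_prime(n * n + 1)

def estimate(t, trials=100000, z=5.0):
    # Test A at `trials` uniform random points of {1, ..., t}.
    hits = sum(A(random.randrange(1, t + 1)) for _ in range(trials))
    p_hat = hits / trials
    # One-sided normal-approximation lower bound on the true fraction;
    # z = 5 corresponds to an error rate of roughly 3 in 10 million.
    margin = z * math.sqrt(p_hat * (1 - p_hat) / trials)
    point = round(p_hat * t)
    lower = math.floor(max(0.0, p_hat - margin) * t)
    return point, lower

print(estimate(10**12))  # (point estimate, high-confidence lower bound)

The same skeleton works for any efficiently testable A; only the function
A changes.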
It appears that this can be done satisfactorily for estimating the
cardinality of {n <= t: n^2 + 1 is prime}, and of {n <= t: n and n+2 are
prime}. In the latter case, one would generate random primes <= t-2, which
is itself an interesting matter. And presumably one gets an estimate with
a very high confidence level that is well beyond the reach of standard
rigorous methods.
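Generating a random prime <= t-2 can be done by straightforward rejection
sampling; here is a sketch, reusing is_prime from above (again, the trial
count is merely illustrative):

def random_prime(bound):
    # Rejection sampling: by the prime number theorem a uniform draw
    # from {2, ..., bound} is prime with probability about 1/log(bound),
    # so the expected number of draws is about log(bound). The result
    # is uniformly distributed over the primes <= bound.
    while True:
        n = random.randrange(2, bound + 1)
        if is_prime(n):
            return n

def twin_fraction(t, trials=100000):
    # Among uniformly random primes p <= t - 2, estimate the fraction
    # with p + 2 also prime.
    hits = sum(is_prime(random_prime(t - 2) + 2) for _ in range(trials))
    return hits / trials

To convert this fraction into an estimate of the cardinality of {n <= t:
n and n+2 are prime}, one multiplies by the number of primes <= t-2; that
count can be computed exactly, without enumerating all the primes, by the
known combinatorial prime-counting methods.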
We can take this a step further. Suppose that this probabilistic method
yields, with very high confidence, a lower bound of t' on the cardinality
of {n <= t: A(n)}. Then we can consider the *weaker* assertion: {n: A(n)}
has cardinality >= t'. Even this weaker assertion may be well beyond the
reach of standard rigorous methods!
If t is chosen and the statistics are done properly, so that the results
are out of reach of standard methods, then we really have some good things
to dig our philosophical teeth into.
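For definiteness, the underlying elementary statistics can be made fully
rigorous via Hoeffding's inequality: if hat_p is the observed fraction of
successes among N independent uniform samples and p is the true fraction,
then

  P(hat_p <= p - eps) <= exp(-2*N*eps^2).

With, say, N = 10^6 and eps = 0.01 (illustrative numbers only), the
one-sided error probability is at most exp(-200) < 10^-86, a confidence
level far beyond anything demanded in experimental science.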
HELP. I would like some of the appropriate experts on the fom to pick some
t and estimates and confidence levels, so that one actually does get
significant information that is well beyond the reach of normal methods,
as indicated.
Now here is the issue with Odlyzko. Odlyzko is a strong proponent of
heuristics in connection with *infinite* problems. E.g., heuristics in
connection with the twin prime conjecture that {n: n,n+2 are prime} is
infinite. Or heuristics in connection with the Riemann zeta hypothesis.
Odlyzko regards his kind of heuristics as "on a par" with the kind of
probabilistics I am talking about here.
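To make the contrast concrete, the paradigm example of such a heuristic
(my choice of illustration) is the Hardy-Littlewood prediction for the
number pi_2(x) of twin prime pairs up to x:

  pi_2(x) ~ 2*C_2*x/(log x)^2, where C_2 = prod over primes p >= 3 of
  (1 - 1/(p-1)^2) = 0.66016...

The prediction agrees strikingly well with actual counts, but the
"probabilistic model" of the primes standing behind it is not itself a
theorem.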
I claim that there is a world of difference between the two. That there is
an underlying rigor behind this kind of probabilistics that can be
formally analyzed and dissected, and that has a kind of universal
applicability. Whereas there is no underlying rigor behind Odlyzko's
heuristics that can be - or at least has yet been - formally analyzed and
dissected, with any kind of universal applicability. Odlyzko concedes a
minute difference, but does not yet regard it as significant.
Before I charge ahead and do some formal analysis and dissection, leading
to some new f.o.m., let me pause and see what people have to say about the
issues and about my call for HELP.