FOM: formalizing probabilistic tests

Harvey Friedman friedman at math.ohio-state.edu
Thu Sep 17 18:15:22 EDT 1998


Cook 11:48AM 9/17/98 writes:

>Rabin's primality test is not a very good example to use in discussing
>the quality of randomness needed for a probabilistic algorithm...
>... but by Miller's
>result it seems reasonable to conjecture that the set of primes is in P.

But I still think that the Miller/Rabin primality test is a good example
until, say, the ERH (extended Riemann hypothesis) is proved. But thanks for
the discussion of the other example.
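
Since the Miller/Rabin test is the running example, here is a minimal
Python sketch of it in its standard textbook form - the function name
is_probable_prime and the default witness count k=40 are my own
illustrative choices. Each round with a random witness cuts the error
probability for a composite n by a factor of at least 4.

    import random

    def is_probable_prime(n, k=40):
        """Miller-Rabin test: False means n is certainly composite;
        True means n passed k rounds (a composite n passes with
        probability at most 4**-k)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 = 2^r * d with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(k):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x == 1 or x == n - 1:
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness that n is composite
        return True

(Miller's deterministic variant replaces the random witnesses a by all
a up to 2(ln n)^2, and it is the correctness of that variant which
needs the ERH.)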

This kind of reasoning represents a different form of rigor and can be
metamathematically investigated - with some care.

Suppose we start with a property P(x) of bit sequences x of length 10,000.
I then generate a bit sequence y of length 10,000 that I am comfortable
with. E.g., I shuffle a new deck of cards 20 times - with various kinds of
shufflings - and pick out card number 26 from the top, to generate one bit
- 0 if red; 1 if black - repeating until I have all 10,000 bits.
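
A toy simulation of this procedure, with random.shuffle standing in for
the physical shuffles (the point, of course, is that one trusts the
physical shuffle, not a library PRNG), might read:

    import random

    def one_bit():
        """One bit by Friedman's recipe: shuffle a fresh deck 20 times,
        take card number 26 from the top; 0 if red, 1 if black."""
        deck = ['red'] * 26 + ['black'] * 26   # 52 cards, by color
        for _ in range(20):                    # stand-in for 20 shuffles
            random.shuffle(deck)
        return 0 if deck[25] == 'red' else 1   # index 25 = card number 26

    y = [one_bit() for _ in range(10_000)]     # the length-10,000 sequence y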

I will have a lot of confidence that P(y) implies "there are at least
2^9,000 x's such that P(x)." And this will be sufficient to formally derive
the results indicated by probabilistic tests. In informal probabilistic
terms, the "chances" that this implication is wrong are at most 1 in
2^1,000, which is pretty small.
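
Written out (in LaTeX, assuming y behaves like a uniform draw from the
2^10,000 bit sequences of length 10,000), the counting behind that figure
is:

    % If fewer than 2^{9000} sequences satisfy P, then for uniform y
    \Pr[P(y)] \;\le\; \frac{2^{9000}}{2^{10000}} \;=\; 2^{-1000},
    % so observing P(y) leaves the implication wrong only on an event
    % of probability at most 2^{-1000}.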

A typical question that comes up is: what kind of new mathematical facts
can be "established" by this method? In particular, can you solve the
continuum hypothesis, or prove the consistency of large cardinals this way?

I can prove that you cannot: it is almost certain that any theorem you
establish this way over ZFC can be established over ZFC without resorting
to this method.
