[FOM] Solution (?) to Mathematical Certainty Problem
Robbie Lindauer
robblin at thetip.org
Fri Jun 27 21:58:29 EDT 2003
Professor Friedman wrote:
> "agreed on these fundamental aspects of the setup, so that ther eis
> any point in continuing the discussion."
We agree that there could "in principle" be a computer which generates
an arbitrarily detailed proof wherein each step is "certain and
obvious", and that we have validated the machine physically and
functionally. We also agree that there would be no point in
disagreeing with this computer; it clearly would do math and logic
better than I do (except perhaps when it is asked to validate itself).
Our disagreement appears to be over whether the "kind of certainty"
generated by such a machine would be of the same kind as that given by
our own intuitive proof that "2 + 2 = 4", say:
..
..
____
....
It's been said that betting behavior is an indication of the level of
commitment that someone has to a belief. I would bet ANYTHING that 2 +
2 = 4 - my own life, etc. No matter how good a computer you make, I
won't bet that an arbitrary answer it gives, once it passes a certain
level of complexity beyond my own merely human ken, will be right. My
guess is that you would stop somewhere too, but probably at a slightly
higher level of complexity than I would. My assertion is that while
there is no absolute barrier ("this far, no further"), there are
nevertheless clear cases where we just wouldn't trust a computer, no
matter how well-made, because we couldn't verify its activity.
We can call this a "bias toward verification" - to be certain is not
only to intuit the correctness of a proposition, but also to be able to
verify its truth (either by proof or observation).
The extension from one deduction, no matter how simple, to a chain of
deductions of arbitrary length is unlikely to produce certainty, for
practical reasons - we forget whether we were on step 200,021 or
200,201, lose track of where we left off, and then it's time to go to
bed. The next morning, the certainty we'd achieved seems to have worn
off and perhaps we have to start over. There are, therefore, likely to
be proofs of this kind of which we will never become convinced.
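To put the practical point in rough numbers (a sketch only - the
per-step reliability figure is an assumption I am making for
illustration, not a measurement of anyone's checking):

# Assumed probability that a human checker handles any one step correctly.
per_step = 0.99999
steps = 200000
print(per_step ** steps)  # roughly 0.135 - nowhere near certainty

Even a checker who is right 99.999% of the time per step has little
ground for confidence in a chain that long.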
Say the validating machine has to accurately read a googolplex of
steps and never make a mistake in processing them. Even if a physical
machine could be made to do this (clusters of routers apparently can do
something like this), the apparatus used to make them do it is very
complex and ends up introducing more possible points of failure. In
any case, what we become certain of is that the validating machine
produced a validation result for us, not that the proof itself must
therefore be right. Maybe someone switched proofs mid-sequence while
we stepped out and forged the report's header (or something like that).
Lots of things can go wrong in a few hours, let alone a few days,
months or years.
I guess at bottom what I'm recognizing is that this weak statistical
reasoning - that we might be able to produce very, very good machines -
is not the kind of thing that we would call "absolute certainty". It's
something else. And it's definitely not what we call mathematical
certainty or logical certainty.
If the steps involved in creating the machine themselves involve some
risk, then at each step we add more risk. By the time we're done, we
might have, say, a 98% chance of success; perhaps we could measure the
success rate of such machines to see how well-validated they are.
While we can, in practice, build systems that usually make this problem
irrelevant, there's no precedent for saying that we could make one
absolutely perfect.
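Here is a minimal sketch of how that compounding works; the per-stage
reliabilities are assumptions chosen only to land in the neighborhood
of that 98% figure:

# Assumed reliability of each stage in building and validating the machine
# (design, fabrication, physical validation, functional validation).
stages = [0.995, 0.995, 0.995, 0.995]
overall = 1.0
for r in stages:
    overall *= r
print(overall)  # about 0.980 - the risks multiply, they never cancel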
In particular, in the case of un-checkable arguments, we would NEVER be
able to check whether or not our validation process on the hardware was
sufficient. We'd then have to argue something like: "It multiplied
correctly in every case with numbers less than 100,000; we aren't able
to test very large numbers, but we assume those are correct because the
way that it multiplies is perfect." We would then use the machine as
the standard for such large calculations. But we'd never be able to
verify it except by making another unverifiable machine.
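A sketch of what that testing argument amounts to (the multiplier below
is a hypothetical stand-in, and I use the host language's own
arithmetic as the "trusted" reference, which of course only pushes the
trust problem back a level):

def machine_multiply(a, b):
    # Hypothetical stand-in for the physical machine's answer.
    return a * b

# Exhaustive checking is only feasible below some bound
# (the 100,000 mentioned above would already take a long while).
BOUND = 1000
assert all(machine_multiply(a, b) == a * b
           for a in range(BOUND) for b in range(BOUND))
# Past BOUND nothing has been checked; the machine itself
# becomes the standard for those calculations.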
We might think of the internet as such a proof machine - the routing
systems deliver billions upon billions of bits correctly every day.
There are redundancies and fail-overs at every step. Yet every year a
very large number of bits are incorrectly routed. Though the
likelihood of any particular set of bits reaching its destination is
near 100%, it is never certain that the next set of bits will make it
to its destination. But I don't join you in thinking that there might
be a "quantum of certainty" - a final tick on the curve approaching
100%. You might be able to say something like "I'm as confident in
this computer as I am in the laws of physics on which it was based".
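A back-of-the-envelope version of the routing point; both numbers are
assumptions for illustration, not measurements of the actual internet:

p_delivered = 0.999999999    # assumed per-bit success rate, "near 100%"
bits_per_day = 10 ** 10      # assumed daily volume, "billions upon billions"
failures_per_year = (1 - p_delivered) * bits_per_day * 365
print(failures_per_year)     # about 3,650 misrouted bits a year
# And the chance that any one particular "next" bit arrives
# is still only p_delivered - never exactly 1.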
But how confident are you that our rendition of the laws of physics is
likely to produce machines that work the same way every time?
Compare this to your certitude that 2 + 2 = 4.
Best,
Robbie