Disinformation and proof length

Vaughan Pratt pratt at cs.stanford.edu
Fri Jul 8 14:47:34 EDT 2022

To address my question of what information is contained in disinformation,
José Caballero suggests using proof length in a Bayesian framework, based
on the premise that a proposition P with a shorter proof and/or more proofs
comes across as more probable than ~P.  As prior he takes P and ~P to be
equally likely, and considers two kinds of disinformation: maintain
equiprobability (confusion, as in the celebrated fear-uncertainty-doubt or
FUD strategy attributed early on to IBM salesmen), and argue against the
truth (recruitment, perhaps better called dissuasion, popular in climate
denial etc.).
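Caballero's suggestion might be sketched as follows.  This is only an
illustration, not his formulation: the weighting 2**-length (shorter proofs
count for more, and multiple proofs accumulate) and the function names are
my own assumptions, chosen in the spirit of algorithmic-complexity priors.

```python
import math

def evidence(proof_lengths):
    # Hypothetical weighting: each proof of length n contributes 2**-n,
    # so shorter proofs and/or more proofs yield more evidence.
    return sum(2.0 ** -n for n in proof_lengths)

def posterior_P(proofs_P, proofs_not_P, prior_P=0.5):
    # Bayes with an equiprobable prior by default, as in the proposal.
    e_P = prior_P * evidence(proofs_P)
    e_not_P = (1.0 - prior_P) * evidence(proofs_not_P)
    return e_P / (e_P + e_not_P)

# P has a short proof and a second proof; ~P has only one long proof,
# so P comes across as more probable.
print(posterior_P([5, 9], [12]))
```

On this toy model, the two kinds of disinformation correspond to supplying
proofs for ~P that either restore equiprobability (confusion) or tip the
posterior below 1/2 (dissuasion).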

Whereas Bayesian statistics raises the question of what prior to start
with, frequentist statistics raises the question of what p-value to use in
deciding when to accept or reject ~P as the null hypothesis.

Machine learning (ML) has evolved to prefer Bayesian statistics on the
basis that it is a better match to the data accumulation process, allowing
the ratio of probabilities of P and ~P to fluctuate indefinitely.  However
in practice timely decisions are needed, and ML could equally well have
evolved to prefer frequentist statistics on that basis, with an evolving
p-value, suitably initialized, in place of an evolving prior, suitably
initialized.  So I don't see either one as shedding more light on my
original question of whether disinformation contains more information
than the conventional wisdom.

(That said, from the Bayesian viewpoint what I was calling the conventional
wisdom (say P, or at least some probability favoring P) would be a more
appropriate prior than zero knowledge of P vs. ~P.  Disinformation would
then be additional data designed to skew the probability ratio in favor of
~P.  But frequentist statistics can manage that scenario too, just in its
own way.)
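The Bayesian version of that scenario can be sketched in odds form: start
from a prior favoring P (the conventional wisdom), then multiply by the
Bayes factor of each new item of data.  The particular numbers and function
name below are assumptions for illustration only; disinformation shows up
as data whose Bayes factors are below 1, i.e. likelier under ~P.

```python
def update_odds(prior_odds_P, bayes_factors):
    # Posterior odds P:~P = prior odds times the product of the
    # Bayes factors of each successive item of data.
    odds = prior_odds_P
    for bf in bayes_factors:
        odds *= bf
    return odds

# Conventional wisdom favors P at 4:1; three items of disinformation,
# each twice as likely under ~P, drag the odds below even.
print(update_odds(4.0, [0.5, 0.5, 0.5]))
```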

My original point was that Shannon information associates more information
with less likely events.  This only works provided the probabilities remain
fixed as events occur: Shannon information does not cater for shifting
probabilities, unlike ML.  So far no one has offered a compelling argument
against this Shannon-based point of view.
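The Shannon-based point rests on the surprisal formula I(x) = -log2 p(x):
the information content of an event, in bits, grows as its probability
shrinks, provided that probability stays fixed as events occur.  A minimal
sketch (function name mine):

```python
import math

def surprisal_bits(p):
    # Shannon information of an event of probability p, in bits.
    # Valid only while p stays fixed; Shannon information does not
    # cater for probabilities that shift as data accumulates.
    return -math.log2(p)

common = surprisal_bits(0.9)  # a likely claim carries few bits
rare = surprisal_bits(0.1)    # an unlikely claim carries many more
```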

Let me now make a new (additional) point.  On the assumption that
disinformation is less likely than the conventional wisdom, Shannon
information may make disinformation more appealing than the conventional
wisdom because the audience feels it is getting more information.

Why watch ABC, CNN, etc. while feeling that you aren't learning much when
Fox News seems to be offering so much more information?

Vaughan Pratt