disinformation in a formal system as exploitation of undecidability and proof complexity

José Manuel Rodríguez Caballero josephcmac at gmail.com
Sat Jun 25 13:45:46 EDT 2022


Vaughan Pratt wrote:

> Recently Rohit Parikh suggested to me that disinformation was not
> information.  As I've always considered disinformation about any given
> proposition to be less likely than the conventional wisdom about it, it
> seemed to me that with Shannon's information theory, a less likely message
> contains more information than a more likely one.  Hence in particular
> disinformation should convey more information than the conventional wisdom.

and asked:

> Is there a foundational way of approaching these seemingly conflicting
> notions of information that isn't too wildly ad hoc?


There are conceptual distinctions among disinformation, misinformation, and
malinformation:

> Misinformation refers to false information that is not intended to cause
> harm.
> Disinformation refers to false information that is intended to manipulate,
> cause damage, or guide people, organizations, and countries in the wrong
> direction.
> Malinformation refers to information that stems from the truth but is
> often exaggerated in a way that misleads and causes potential harm.


Reference:
https://cyber.gc.ca/en/guidance/how-identify-misinformation-disinformation-and-malinformation-itsap00300

Gilles Deleuze was a 20th-century French philosopher strongly inspired by
mathematics. He was criticized because his philosophical versions of
mathematical concepts are not always equivalent to the original mathematical
formulations; his response was always that this was not necessary: he was
aware that mathematically inspired philosophical concepts are not the same as
the mathematical concepts themselves. That said, I will try to answer Pratt's
question in the framework of Deleuze's course: "Appareils d'État et
machines de guerre"

audio:
https://www.youtube.com/playlist?list=PLATazQ-QShe-JfDXbmYOXumDjC4BhlLjM

text:
https://deleuze.cla.purdue.edu/sites/default/files/pdf/lectures/fr/ATP%20V-13b-StateApp-250380%20Fr.pdf

My starting point to formalize the notion of disinformation is this quote
from Deleuze's course:

> Troisième rubrique, on a vu : « la question des modèles de réalisation
> dans une axiomatique », à savoir les modèles de réalisation, dans une
> axiomatique mondiale du capital, étant les États eux-mêmes, d’où la question
> dans cette troisième rubrique : en quel sens [40 :00] peut-on dire que ces
> États, que les formes diverses d’État, sont isomorphes ou non par rapport à
> l’axiomatique, avec dès lors toutes sortes de bipolarité : bipolarité entre
> les États du centre, seconde bipolarité entre États capitalistes et États
> socialistes-bureaucratiques, troisième bipolarité entre États du
> centre-États de la périphérie ? Bon, on en était là


My non-literary English translation:

> Third heading, we have seen: "the question of the models of realization
> in an axiomatic system", namely the models of realization, in a worldwide
> axiomatics of capital, being the States themselves, hence the question in
> this third heading: in what sense [40:00] can we say that these States,
> that the various forms of State, are isomorphic or not with respect to the
> axiomatics, with consequently all sorts of bipolarities: a bipolarity
> among the States of the center, a second bipolarity between capitalist
> States and socialist-bureaucratic States, and a third bipolarity between
> States of the center and States of the periphery? Alright, there we were.


For Deleuze, a "kind of state" is determined by an axiomatic system, e.g.,
capitalist (kind of) states, socialist-bureaucratic (kind of) states. But a
concrete state is just a model of this axiomatic system, e.g., the US is a
model of a capitalist state, but it is not necessary the same model as
France, even if they share the axioms of a capitalist state. Now, the
notion of disinformation enters: because the typical citizen (Alice) is
aware of the axioms of the kind of state in which she lives, but may ignore
the details of the particular realization of this model, a malicious
disinformation agent (Eve) can try to create a narrative to convince Alice
that the particular realization of the state in which she lives is not the
real one. To do so, Eve's narrative will keep the axioms of the kind of
state in which Alice lives as a requirement to be credible, and will only
play with the undecidable propositions. Therefore, a measure of
disinformation could be the rate of undecidable propositions in a given
narrative. If this rate is high, then the text is more likely to be
disinformation. The threshold for the rate of undecidability concerning the
distinction between "information" and "disinformation" in a text should be
determined empirically via statistics.
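
As a rough formalization of this measure (my own notation, not Deleuze's),
one could write something like

  \mathrm{rate}(N) \;=\; \frac{|\{\, p \in P(N) \;:\; T \nvdash p \ \text{and}\ T \nvdash \neg p \,\}|}{|P(N)|}

where T is the axiomatic system of the kind of state and P(N) is the set of
propositions extracted from the narrative N.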

The problem is how to detect whether a given proposition in a narrative is
undecidable with respect to the axiomatic system of a kind of state. The
first step is to extract the formal system from the text (data cleaning in
natural language processing). With the formal system at our disposal, the
task is to detect which propositions are undecidable. In general, this task
is uncomputable, but in the particular case of an axiomatic system
corresponding to a kind of state, it may be decidable. Even if it is
undecidable, heuristic rules could be applied to find the undecidable
propositions, e.g., try to prove the proposition or its negation using an
automated reasoning tool such as an SMT (Satisfiability Modulo Theories)
solver; if neither attempt succeeds, declare the proposition undecidable.
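
A minimal sketch of this heuristic, assuming the Z3 SMT solver with its
Python bindings (the z3-solver package); the axioms and the propositions of
the narrative below are purely illustrative placeholders for what the
extraction step would produce:

from z3 import Bool, Implies, And, Not, Solver, unsat

def provable(axioms, goal, timeout_ms=1000):
    # The goal follows from the axioms iff (axioms AND NOT goal) is unsatisfiable.
    s = Solver()
    s.set("timeout", timeout_ms)
    s.add(And(*axioms), Not(goal))
    return s.check() == unsat

def classify(axioms, proposition):
    # Heuristic from the text: try to prove the proposition, then its negation.
    if provable(axioms, proposition):
        return "provable"
    if provable(axioms, Not(proposition)):
        return "refutable"
    return "undecidable"  # neither attempt succeeded within the budget

def undecidability_rate(axioms, propositions):
    # The proposed measure: fraction of the narrative's propositions that are undecidable.
    labels = [classify(axioms, p) for p in propositions]
    return labels.count("undecidable") / len(labels)

# Toy example: hypothetical axioms of a "kind of state" and a three-proposition narrative.
p, q, r = Bool("p"), Bool("q"), Bool("r")
axioms = [p, Implies(p, q)]
narrative = [q, Not(p), r]  # q is provable, Not(p) is refutable, r is independent
print(undecidability_rate(axioms, narrative))  # prints 0.333...

In this propositional toy case, "undecidable" simply means independent of the
axioms; with richer theories the solver's timeout stands in for a failed
proof attempt.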

This is just a first-order approximation to the problem of disinformation.
A more elaborate solution may include detecting false propositions whose
refutations are so long that the typical citizen, who is not capable of
following 10 steps of reasoning one after another, perceives them as
undecidable. The main hypothesis of cognitive warfare is that an overloaded
brain, e.g., due to mental fatigue after working many hours, can only follow
short proofs, say up to 3 steps. Therefore, the notion of proof complexity is
fundamental for a theory of disinformation in formal systems.
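
In that spirit, a bounded-reader refinement of the rate defined above (again
my own notation) counts as effectively undecidable any proposition with no
sufficiently short proof or refutation:

  \mathrm{rate}_k(N) \;=\; \frac{|\{\, p \in P(N) \;:\; \text{no proof of $p$ or of $\neg p$ from $T$ has at most $k$ steps} \,\}|}{|P(N)|}

where k is the maximal number of reasoning steps the reader can follow
(e.g., k = 3 for the overloaded brain, k = 10 for Gromov's mathematician
quoted below).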

Beyond the Deleuzian framework, here are some references related to the
exploitation of humans' limited capacity to follow many steps of reasoning
(this is not the only factor used in cognitive warfare, but it is one of the
most important):

Social Laser:
- paper: https://royalsocietypublishing.org/doi/full/10.1098/rsta.2015.0094
- book:
https://www.amazon.com/Social-Laser-Application-Information-Processes/dp/981480083X
- lectures: https://youtu.be/Hm8fEhqZBdk

Cognitive Warfare (innovation hub):
-  Claverie, Bernard, and François du Cluzel. "The Cognitive Warfare
Concept."
https://www.innovationhub-act.org/sites/default/files/2022-02/CW%20article%20Claverie%20du%20Cluzel%20final_0.pdf
- Open Innovation - Cognitive Warfare use case:
https://youtu.be/xrnjcCgO19I?t=352

I am personally interested in the subject from the point of view of
biostatistics. More precisely, I find it interesting to measure the ability
of a human being to follow a long chain of reasoning under different
circumstances, e.g., under stress, under mental fatigue, with a fresh mind,
etc. My inspiration for this research is Gromov's quote:

> what is common between mathematicians and schizophrenics: only these two
> people may trust a chain of 10 consecutive arguments. In life, one or two
> is enough, you know, the chain breaks down


https://youtu.be/buThBDcUYZI?t=1180

Kind regards,
Jose M.

