Statistical inference of formal systems and the formalization of the notion of disinformation

José Manuel Rodríguez Caballero josephcmac at gmail.com
Sat Oct 29 02:30:00 EDT 2022


Dear FOM members,
In relation to recent events in the news, I would like to return to the
foundational approach to the mathematical modeling of disinformation
(including ideologies of hate and discrimination). My starting point is the
framework of cognitive warfare, whose slogan is “the target is trust”:

Claverie, Bernard, and François du Cluzel. “The Cognitive Warfare Concept.”
https://www.innovationhub-act.org/sites/default/files/2022-02/CW%20article%20Claverie%20du%20Cluzel%20final_0.pdf

As I explained in emails some months ago, my model of disinformation is a
competition among various formal systems that I call worldviews. I do not
claim to provide a precise framework for what a philosopher might call a
worldview. I am operating at the level of toy models, which I hope will be
useful for the issue of disinformation.

As in statistics, we have multiple models, in our case “worldviews,” and we
want to infer which one is the most likely to have generated a certain
proposition (the data). For example, imagine we have two competing
worldviews associated with the ideas

(i) “most cats hate dogs”

(ii) “most cats don't hate dogs”

and the proposition (data) “a dog was attacked by either another dog or by
a cat”. Since, in this toy model, we know nothing about how dogs interact
with each other, the dog-attacks-dog component is equally likely under both
worldviews; the difference comes from the cat-attacks-dog component, which
is far more likely under (i). Hence this proposition is most likely to have
been generated by model (i).
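
To illustrate the inference step, here is a minimal sketch in Python; the
probabilities are invented for the sake of the example, and only their
ordering matters.

# Toy likelihood comparison for the cat/dog example.
# All numbers are invented for illustration; only their ordering matters.

# Dog-dog aggression rate: the toy model says nothing about it, so the
# same value is used under both worldviews.
P_DOG_ATTACKS_DOG = 0.1

# Cat-dog aggression rate depends on the worldview.
P_CAT_ATTACKS_DOG = {
    "(i) most cats hate dogs": 0.6,
    "(ii) most cats don't hate dogs": 0.05,
}

def likelihood(p_cat, p_dog=P_DOG_ATTACKS_DOG):
    """P(data | worldview), where the data is 'a dog was attacked by
    another dog or by a cat' (the two attacks treated as independent)."""
    return 1.0 - (1.0 - p_cat) * (1.0 - p_dog)

for name, p_cat in P_CAT_ATTACKS_DOG.items():
    print(f"{name}: P(data | worldview) = {likelihood(p_cat):.3f}")

# Worldview (i) assigns the higher likelihood to the data, so it wins
# the inference, as in the toy example above.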

I think the key to disinformation is to provide a model (conspiracy theory)
that outperforms all socially acceptable models when it comes to
statistical inference. It is like an invasive species entering a new
ecosystem (cognition) and replacing the local species.

The good news is that this remarkable efficiency in inferential power
should distinguish disinformation from genuine information. So this could
serve as a heuristic for detecting disinformation using natural language
processing.

The foundational question is how to introduce a metric that measures the
likelihood that a given proposition belongs to a given formal system. I
have the following heuristic ideas (a toy scoring sketch follows the list):

(a) the shorter the deduction of that proposition, the higher the
likelihood that it belongs to the formal system;

(b) the greater the number of independent deductions of that proposition,
each bounded in length by a fixed constant, the greater the likelihood that
it belongs to the formal system.
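
To make heuristics (a) and (b) concrete, here is a minimal sketch, assuming
that some external proof search supplies the lengths of the independent
deductions it found; the function name, the exponential weighting, and the
default bound are illustrative choices, not part of the proposal above.

from math import exp

def membership_score(proof_lengths, length_bound=20):
    """Heuristic score for 'this proposition belongs to this worldview'.
    (a) shorter deductions weigh more (exponential decay in length);
    (b) every independent deduction within the length bound adds weight."""
    return sum(exp(-length) for length in proof_lengths if length <= length_bound)

# Example: in worldview A the proposition has one short and two medium
# independent deductions; in worldview B it has a single long one.
score_A = membership_score([3, 7, 9])
score_B = membership_score([18])
print(f"worldview A: {score_A:.4f}  worldview B: {score_B:.6f}")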

Another interesting problem would be to artificially generate a worldview
(deductive system) that outperforms, in inferential power, the pathological
worldview (conspiracy-theoretic thinking) that we are interested in
eliminating. Given the correct formalization, this appears to be an
optimization problem. Disinformation is the new recruitment process for the
enemy's army (regardless of who the “enemy” is supposed to be, political or
social).
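
Under the (hypothetical) assumption that each candidate worldview can be
summarized by a scoring function over propositions, the optimization
problem might be sketched as follows; all names and types here are
illustrative scaffolding, not a worked-out formalization.

from typing import Callable, List, Optional

Worldview = Callable[[str], float]  # proposition -> membership score

def inferential_power(w: Worldview, corpus: List[str]) -> float:
    """Total score a worldview assigns to a corpus of propositions."""
    return sum(w(p) for p in corpus)

def best_countermeasure(candidates: List[Worldview],
                        pathological: Worldview,
                        corpus: List[str]) -> Optional[Worldview]:
    """Among acceptable candidates, return one that strictly out-scores the
    pathological worldview on the corpus (maximizing the margin), if any."""
    threshold = inferential_power(pathological, corpus)
    winners = [w for w in candidates if inferential_power(w, corpus) > threshold]
    return max(winners, key=lambda w: inferential_power(w, corpus), default=None)

Whether such a countermeasure exists among the socially acceptable
candidates is, of course, the substantive question; the sketch only fixes
the shape of the search.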

I found another approach to misinformation in this Russian paper (if I
understood it correctly, that approach is algebraic, number-theoretic, and
physically inspired, whereas my approach is based on formal systems and
statistics, and is biologically inspired):

Maslov, Victor Pavlovich. “Analytic number theory and disinformation.”
*Mathematical Notes* 100.3 (2016): 568-578.
http://www.mathnet.ru/links/2510ee9fab5c8a233c8d769cc1fa1a5f/mzm11383.pdf

Other references to disinformation in the recent mathematical literature:

Brody, Dorje C., and David M. Meier. “Mathematical models for fake news.”
*Financial Informatics: An Information-Based Approach to Asset Pricing*.
2022. 405-423.
https://www.worldscientific.com/doi/abs/10.1142/9789811246494_0018

Giordano, Giuseppe, Serena Mottola, and Beatrice Paternoster. “Some
mathematical aspects to detect fake news: a short review.” *2020
International Conference on Mathematics and Computers in Science and
Engineering (MACISE)*. IEEE, 2020.

Pathak, Archita, Rohini K. Srihari, and Nihit Natu. “Disinformation:
analysis and identification.” *Computational and Mathematical Organization
Theory* 27.3 (2021): 357-375.

Shrivastava, Gulshan, et al. “Defensive modeling of fake news through
online social networks.” *IEEE Transactions on Computational Social
Systems* 7.5 (2020): 1159-1167.

Finally, I am available to collaborate on the statistical analysis of
disinformation, in case someone has developed a mathematical model on this
topic and wants to test it on empirical data (I have skills in statistics
and statistical software, but I don't have the data). My current
affiliation is Université Laval.

Kind regards,
Jose M.