[FOM] NIM/Deep Learning
Dennis E. Hamilton
dennis.hamilton at acm.org
Mon Oct 16 12:18:53 EDT 2017
Since NIM has been analyzed and has a straightforward algorithmic means of determining winning strategies, two matters come to mind.
1. First, since it is possible to create a perfect NIM player, the straightforward thing to do is match the machine-learning (ML) player against the perfect player, using randomly chosen initial heaps and random selection of the first mover. The perfect player, even from a losing position, will exploit mistakes by its opponent, so this should encourage learning by the ML player. See <https://en.wikipedia.org/wiki/Nim>. Also, make clear whether the normal game (last mover wins) or the misère game (last mover loses) is intended in this challenge.
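For reference, the perfect normal-play strategy is the classical nim-sum rule: XOR the heap sizes together and, if the result is nonzero, reduce some heap so the XOR becomes zero. A minimal Python sketch of such a perfect player might look like the following (the function name and the losing-position fallback of taking one object from the largest heap are illustrative choices, not part of the challenge as posed):

```python
from functools import reduce
from operator import xor

def perfect_move(heaps):
    """Return a (heap_index, new_size) move under normal play (last mover wins).

    If the position is winning, the returned move leaves a nim-sum of zero;
    if it is losing, an arbitrary legal move is returned.
    """
    nim_sum = reduce(xor, heaps, 0)
    if nim_sum != 0:
        # A winning move exists: shrink some heap h to h XOR nim_sum.
        for i, h in enumerate(heaps):
            target = h ^ nim_sum
            if target < h:
                return i, target
    # Losing position: stall by taking one object from the largest heap.
    i = max(range(len(heaps)), key=lambda j: heaps[j])
    return i, heaps[i] - 1
```

For example, from heaps (3, 4, 5) the nim-sum is 2, and reducing the first heap from 3 to 1 restores a zero nim-sum.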
2. The setup for communication between the players and constraining the ML player to make only legal moves should be straightforward. The learning mechanism is left to be defined. There needs to be some agreement on the initial state and nature of the ML player, however.
A challenge in this case will be the prospect of winning by complete brute force. That is, nothing has to be learned. If learning is required to be demonstrated, there must be sufficient detection of patterns in the play of games that combinatorial explosion is curtailed and the ML player can go "deeper," improving its speed as well as its success. For an ML player equipped to discover the relationship among the longitudinal parities of the binary-expressed heap sizes, it becomes interesting to ask what minimum preparation the ML player must have in order to make such leaps. Think of this as a reverse-mathematics-like question [;<). Or a kind of domain-knowledge preparation.
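To make the brute-force point concrete: a memoized exhaustive search classifies small positions as wins or losses without ever representing the parity relationship, so a player with enough memory "wins" while learning nothing. A sketch (assuming normal play; the sorted-tuple canonicalization is an illustrative optimization):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(heaps):
    """True if the player to move wins under normal play, by exhaustive search.

    `heaps` is a tuple; positions are canonicalized by sorting so that
    equivalent positions share one cache entry.
    """
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            child = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            if not is_winning(child):
                return True
    # No moves remain, or every move hands the opponent a winning position.
    return False
```

On small positions this agrees exactly with the parity rule (a position is winning iff the XOR of the heap sizes is nonzero), which is what the ML player would need to discover to escape the combinatorial explosion.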
There is a simpler, limited game that may be useful for teasing out how one calibrates ML players: Petals around the Rose, <https://en.wikipedia.org/wiki/Petals_Around_the_Rose>. Clearly, the ML player can memorize the answers, even with a larger number of dice. The question is whether it can learn the petals relationship in some form and explain its inference. Then, what must the ML player be equipped with to do that? Then on to more-challenging problems and qualification of ML players.
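For calibration purposes, the target relationship is tiny once known (spoiler): each die showing 3 contributes 2 petals and each die showing 5 contributes 4, since the "rose" is a die's center pip and the "petals" are the pips around it. A one-line Python sketch:

```python
def petals(dice):
    """Score a roll of Petals around the Rose.

    A die showing 3 contributes 2 petals, a die showing 5 contributes 4;
    all other faces contribute 0.
    """
    return sum({3: 2, 5: 4}.get(d, 0) for d in dice)
```

The calibration question is then whether an ML player can recover this rule from examples, rather than memorizing the (finite) table of rolls.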
-----Original Message (abridged) -----
From: fom-bounces at cs.nyu.edu [mailto:fom-bounces at cs.nyu.edu] On Behalf Of Harvey Friedman
Sent: Sunday, October 15, 2017 12:48
To: Foundations of Mathematics <fom at cs.nyu.edu>
Subject: [FOM] NIM/Deep Learning
My original posting on NIM and AI, some other FOM postings, and my followup posting are at [ ... ]
Since then, I have been in correspondence about it with Scott Anderson
and a specialist in Deep Learning, John Schulman. John has kindly
given me permission to post his correspondence with me and Scott on FOM.
FROM JOHN SCHULMAN
October 14, 2017
I think it'd be a fun project to try the latest algorithms on Nim and
see what happens. It'd serve as a good unit test for whether one has a
convergent algorithm or not.
My first thought was that it'd be really hard to define the inputs and
outputs in a way that is compatible with neural networks--variable
sized objects are always hard to deal with--but in this case there's a
[ ... ]
October 15, 2017
Thanks for getting this to me!
I had previously started a discussion of NIM/deep learning on the FOM
email list. I don't think the discussion got the attention of deep
people in deep learning (smile), but there are a lot of readers with a
casual interest and some awareness and some knowledge of deep learning
who do read FOM regularly.
By the way, the second link here at the end has some NIM variants that
I have a feeling might be "humanly deep", and perhaps while humans try
to wrap their heads around them, or such things, we can run the
machines on them in tandem, and compare their progress. Man v. machine
on them as the years go by!
[ ... ]