[FOM] AI Challenge

Joe Shipman joeshipman at aol.com
Thu Sep 28 17:11:56 EDT 2017


"Deep learning" AIs now learn general games by playing against each themselves. NIM is a superb test case because correct play involves a global criterion that is very sensitive to any local variation. Because you don't want it exhaustively searching all the positions and all the simple rules, you need to give it positions involving dozens or hundreds of stacks whose sizes are not simply given in binary--either unary or coded into a base-n alphabet for some n. These are still easily solvable by human players.

-- JS

> On Sep 28, 2017, at 7:21 AM, Harvey Friedman <hmflogic at gmail.com> wrote:
> 
> Around March 1, 2017, I formulated a challenge to the AI community in
> light of the recent breakthroughs in and around "deep learning".
> 
> But can it replace mathematicians?
> 
> Here is my AI Challenge.
> 
> In order not to get horribly slaughtered by humans in the game of NIM,
> the computer is going to have to play NIM perfectly -- since humans
> have figured out a nice winning strategy for NIM, a two-person
> zero-sum game.
> 
> Can a computer figure out how to play NIM perfectly? Of course, it is
> not being given the winning strategy for NIM.
> 
> So there is the question of just what information we want to give the
> computer before we ask it to play NIM.
> 
> I would say that there is some basic infrastructure of game theory
> that it should be given - i.e., where it knows about game trees.
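
By "basic infrastructure of game theory" one might have in mind, at minimum, something like the following generic game-tree evaluator, which is handed only a move rule and the normal-play convention (an illustrative sketch, not a claim about what any particular system would actually be given):

def nim_moves(stacks):
    """All positions reachable in one NIM move from a tuple of stack sizes."""
    for i, size in enumerate(stacks):
        for take in range(1, size + 1):
            nxt = list(stacks)
            nxt[i] -= take
            yield tuple(s for s in nxt if s > 0)

def mover_wins(position, moves):
    """Generic game-tree evaluation: the player to move wins iff some move leads to
    a position from which the opponent loses; with no moves left, the mover loses."""
    return any(not mover_wins(nxt, moves) for nxt in moves(position))

print(mover_wins((1, 2, 3), nim_moves))   # False: a lost position for the player to move
print(mover_wins((1, 2, 4), nim_moves))   # True: there is a winning move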
> 
> To behave like a real mathematician discovering the winning strategy
> for NIM, we can't really give the computer an enormous number of games
> where one side is playing perfectly and the other side is putting up a
> lot of resistance (it is not clear what that should mean for NIM -- an
> interesting question, maybe). HOWEVER, maybe we should explore what
> happens when the computer is given such a huge database of games. It
> will of course generate its own games on its own, but not with one
> player playing perfectly (until, of course, it figures out what is
> really going on, which is the issue).
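
To make the self-generated-games scenario concrete, here is a minimal self-play sketch -- plain tabular learning on one tiny position, nothing like a deep network, and not anything specified in the challenge -- in which neither side is given the winning strategy or any database of expert games:

import random
from collections import defaultdict

def moves(pos):
    """All legal NIM moves from a position given as a sorted tuple of stack sizes."""
    result = set()
    for i, size in enumerate(pos):
        for take in range(1, size + 1):
            nxt = list(pos)
            nxt[i] -= take
            result.add(tuple(sorted(s for s in nxt if s > 0)))
    return list(result)

Q = defaultdict(float)   # learned value of (position, move) for the player about to move

def choose(pos, eps):
    """Epsilon-greedy move selection from the shared table."""
    opts = moves(pos)
    if random.random() < eps:
        return random.choice(opts)
    return max(opts, key=lambda m: Q[(pos, m)])

def self_play_episode(start, eps=0.2, alpha=0.1):
    """Both sides use and update the same table; the player taking the last stone wins."""
    pos, trace = start, []
    while pos:                         # the empty position means the previous mover just won
        move = choose(pos, eps)
        trace.append((pos, move))
        pos = move
    reward = 1.0                       # +1 for the winner's moves, -1 for the loser's
    for pos, move in reversed(trace):
        Q[(pos, move)] += alpha * (reward - Q[(pos, move)])
        reward = -reward

start = (1, 2, 4)                      # nim-sum 7; the unique winning reply is (1, 2, 3)
for _ in range(20000):
    self_play_episode(start)
print(max(moves(start), key=lambda m: Q[(start, m)]))   # should come to prefer (1, 2, 3)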
> 
> OK, let's do some toy examples. Let's look at some really trivial games.
> 
> 1. NIM with exactly two rows. I assume the technology is more
> than adequate to get the computer to play this perfectly, where it
> generates its own game trees from scratch. It might be interesting to
> develop some nice theory about how a computer could discover the
> winning strategy here. (See the brute-force sketch after item 2.)
> 
> 2. Another trivial game where I think the computer should not have
> much of a problem given very little information is NIM with one row,
> but where players are only allowed to take 1 or 2 stones away. Again,
> it would probably be interesting to have a clean theory of how the
> winning strategy is easily discovered by a computer.
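
Both toy games can be settled by a brute-force pass over their game trees. The following sketch (illustrative only, not a learning system) confirms the familiar answers: in two-row NIM the player to move loses exactly when the rows are equal, and in the take-1-or-2 game exactly when the pile size is a multiple of 3.

from functools import lru_cache

@lru_cache(maxsize=None)
def two_row_mover_wins(a, b):
    """Two-row NIM, last stone taken wins: can the player to move force a win from rows a, b?"""
    return any(not two_row_mover_wins(*m) for m in
               [(a - k, b) for k in range(1, a + 1)] +
               [(a, b - k) for k in range(1, b + 1)])

@lru_cache(maxsize=None)
def take12_mover_wins(n):
    """One pile of n stones, take 1 or 2 per turn, last stone taken wins."""
    return any(not take12_mover_wins(n - t) for t in (1, 2) if t <= n)

# Losing positions for the mover: equal rows in game 1, multiples of 3 in game 2.
print(all(two_row_mover_wins(a, b) == (a != b) for a in range(12) for b in range(12)))
print(all(take12_mover_wins(n) == (n % 3 != 0) for n in range(60)))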
> 
> Of course, we can really jump the shark by asking AI to find proofs of, e.g.,
> 
> A. n^2 = 2m^2 has no solution in positive integers.
> B. For all integers n there is a prime > n.
> 
> It seems like AI is progressing so nicely because it finds ways of
> doing things that are apparently not the way we do them. (This can be
> argued - perhaps according to some the deep learning is in some
> important ways a kind of primitive imitation of some important ways we
> do things). So the natural question is: what kind of things force AI
> to do the way we do them? It apparently has found different ways to
> play checkers, chess, go, and these different ways are better than the
> ways we play them.
> 
> So you may be able to clean the inside of my house, paint the outside
> of my house, go get the items on my shopping list, come back and cook
> the food perfectly at my house according to my tastes, and so forth,
> but can you do my math?
> 
> Harvey


