[FOM] AI Challenge

Patrik Eklund peklund at cs.umu.se
Fri Sep 29 02:05:50 EDT 2017

Dear Harvey,

Below some comments. Let me here also say that it is a very important 
topic. There is stuff to discuss and also meta-stuff involved. Since AI, 
based e.g. on math, is also very much applied, for instance in health, 
the meta-stuff may be at least meta-math but also meta-health, or maybe 
meta-AI, whatever that might be.

Comments indeed below.

On 2017-09-28 14:21, Harvey Friedman wrote:
> Around March 1, 2017, I formulated a challenge to the AI community in
> light of the recent breakthroughs in and around "deep learning".
> But replace mathematicians?
> Here is my AI Challenge.

My seat belt is fastened.

> In order to not get horribly slaughtered by humans in the game of NIM,
> the computer is going to have to play NIM perfectly -- since humans
> have figured out a nice winning strategy for the two person zero sum
> NIM.
> Can a computer figure out how to play NIM perfectly? Of course, it is
> not being given the winning strategy for NIM.
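For concreteness, the winning strategy Harvey alludes to is the classical nim-sum rule: the player to move loses exactly when the XOR of the row sizes is zero. A minimal sketch (function names are mine, not from the post):

```python
from functools import reduce

def nim_sum(rows):
    """XOR of all row sizes; zero means the player to move is losing."""
    return reduce(lambda a, b: a ^ b, rows, 0)

def winning_move(rows):
    """Return (row_index, new_size) restoring nim-sum zero, or None if no
    winning move exists (i.e. the position is already lost for the mover)."""
    s = nim_sum(rows)
    if s == 0:
        return None
    for i, r in enumerate(rows):
        target = r ^ s
        if target < r:  # legal only if it actually removes stones
            return (i, target)
```

For example, from rows (3, 4, 5) the nim-sum is 2, and reducing the first row to 1 restores it to zero.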


Before reading onwards, at this point my thinking is tuned to 
distinguish between "mind" and "machine". When we say artificial 
intelligence, we may want to distinguish between an artificial mind and 
an artificial machine. The latter is easier to model, isn't it? 
Automata discussions and so on. Around the 1960s, give or take, control of dynamics 
and automation of computation were both systems and some thought the 
framework should be the same. Monoidal categories were around already at 
that time, so category theory as a meta tried out something (Budach, 
Ehrig, Goguen, Manes, ...). Maybe even Lotfi Zadeh was scratching these 
surfaces. We owe it to Lotfi to try to find out. Lotfi often underlined 
natural language.

> So there is the question of just what information we want to give the
> computer before we ask it to play NIM.

Now you're cookin'!


Who's WE? What's GIVE? Information I understand better at this point, 
seat belt still fastened.

> I would say that there is some basic infrastructure of game theory
> that it should be given - i.e., where it knows about game trees.

We've tried to suggest (I can't remember if I've tried this also at FOM) 
that logics "communicate", that is, a logic is a categorical object, and 
there are morphisms between objects. So chess player 1 and chess player 
2 may not share the same logic, even if they might both accept the Axiom 
of Choice. Similarly, a doctor and a nurse might not share the same logic 
within a care pathway, even if they might both accept principles for 
choices of various kinds.

> To behave like a real mathematician discovering the winning strategy
> for NIM, we can't really give the computer an enormous number of games
> where one side is playing perfectly and the other side putting up a
> lot of resistance - (not clear what that should mean for NIM, an
> interesting question maybe). HOWEVER, maybe we should explore what
> happens when the computer is given such a huge database of games. It
> will of course on its own generate its own games, but not with one
> player playing perfectly (until of course it figures out what is
> really going on, which is the issue).

What do you mean by REAL in real mathematician? Is this REAL like in 
real and complex analysis, or real numbers and their operators versus 
complicated algebra as underlying syntactic and semantic structure in 
logic? The former is a bit more set theory in analysis. The latter a bit 
more category theory in logic.

The "huge database of games" I find perhaps as the most important thing 
here. Deep learning underlines BIG DATA, but it's really about COMPLEX 
STRUCTURE. IBM (see also below) sells Watson that hides methodology for 
not so analytically well versed end-users, but Watson does not create 
structure. It computes but does not infer. Untyped data quickly becomes 
big, but analysis and learning don't unravel types. In health, data 
must be typed, otherwise computation is just mean values and hypothesis 
testing providing "evidence" in evidence-based medicine.

> OK, let's do some toy examples. Let's look at some really trivial 
> games.

Yes. Good.

> 1. NIM with exactly two rows only. I assume the technology is more
> than adequate to get the computer to play this perfectly, where it
> generates its own game trees from scratch. It might be interesting to
> develop some nice theory about how to discover the winning strategy by
> computer here.

What is STRATEGY? We need that before we can formalize DISCOVER 
(strategy). If we are practical and stay within conventional, 
engineering-style deep learning, we can learn structures. Neural nets only 
learn parameters, but clustering and Bayesian methods can learn structures. 
I.e., we can generate statements (sentences) from data (expressions or 
terms). But strategy is about entailment, and deep learning does not have 
strategy learning in its portfolio. Deep learning doesn't have logic in 
its portfolio, so it doesn't even know what entailment is.
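To keep the discussion concrete: for Harvey's game 1, "generating its own game trees from scratch" can be read as plain backward induction over positions, with no strategy supplied. A sketch, under normal-play rules (the last mover wins):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(a, b):
    """True iff the player to move wins two-row NIM from rows (a, b)."""
    if a == 0 and b == 0:
        return False  # no move left: the mover has lost
    # enumerate every legal move: remove k >= 1 stones from one row
    moves = [(a - k, b) for k in range(1, a + 1)] + \
            [(a, b - k) for k in range(1, b + 1)]
    return any(not mover_wins(x, y) for x, y in moves)

# The pattern a program could "discover": the mover loses iff the rows are equal.
assert all(mover_wins(a, b) == (a != b) for a in range(12) for b in range(12))
```

Whether extracting the closed-form rule from such a table counts as discovering the strategy is, of course, exactly the question.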

> 2. Another trivial game where I think the computer should not have
> much of a problem given very little information is NIM with one row,
> but where players are only allowed to take 1 or 2 stones away. Again,
> would be interesting, probably, to have a clean theory of how the
> winning strategy is discovered easily by computer.

Changing the rules you mean? Or do you even mean creating rules that can 
change rules? Self-referentiality and meta ...
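For Harvey's game 2 the tabulation is a single loop; a sketch of how the losing positions fall out of the computation rather than being given in advance:

```python
def losing_positions(limit):
    """Pile sizes (up to limit) from which the player to move loses,
    when each move removes 1 or 2 stones and the last mover wins."""
    wins = [False] * (limit + 1)  # wins[0] = False: empty pile, mover loses
    for n in range(1, limit + 1):
        wins[n] = any(not wins[n - k] for k in (1, 2) if n - k >= 0)
    return {n for n in range(limit + 1) if not wins[n]}

print(sorted(losing_positions(12)))  # the multiples of 3: [0, 3, 6, 9, 12]
```

Here too the table exhibits the pattern (multiples of 3); stating and proving it is the extra step.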

> Of course, we can really jump the shark by asking AI to find proofs of, 
> e.g.,
> A. n^2 = 2m^2 has no solution in integers.
> B. For all integers n there is a prime > n.

We can ask logic or FOM to do the same, and ask them all to sit in the 
same boat. Then we rock the boat.
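Both A and B have, for what it's worth, long been mechanized; in Lean 4 with mathlib they can be stated in a few lines (the lemma names are from mathlib as I recall them, so treat this as a sketch):

```lean
import Mathlib

-- A, in its equivalent form: the square root of 2 is irrational,
-- i.e. n^2 = 2m^2 has no solution in nonzero integers.
example : Irrational (Real.sqrt 2) := irrational_sqrt_two

-- B: for every n there is a prime p ≥ n.
example (n : ℕ) : ∃ p, n ≤ p ∧ Nat.Prime p := Nat.exists_infinite_primes n
```

Finding such proofs, as opposed to checking or citing them, is where the AI challenge bites.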

> It seems like AI is progressing so nicely because it finds ways of
> doing things that are apparently not the way we do them. (This can be
> argued - perhaps according to some the deep learning is in some
> important ways a kind of primitive imitation of some important ways we
> do things). So the natural question is: what kind of things force AI
> to do the way we do them? It apparently has found different ways to
> play checkers, chess, go, and these different ways are better than the
> ways we play them.

Kasparov against Deep Blue was also interesting. First he won, then he 
lost. And he lost because the second time around he applied strategy as 
he had the first time around. He asked for a third match. IBM said "nope", 
and AI has never recovered. Deep Blue was probably given even more games 
to "analyze", and it simply added more to that combinatorics of 
openings. Kasparov may have believed he was already in middle game with 
Deep Blue 2, based on experience from Deep Blue 1, but he was wrong. 
Deep Blue was probably still playing opening, while Kasparov was already 
into strategies in the middle game.

I am not good at chess, but the distinction between openings and middle 
play cannot be sharp, can it? Reading chess opening books, it's 
combinatorics up to a certain point, and that point when it tips over 
from opening to middle is quite fuzzy. STRATEGY for choosing an opening 
is not the same STRATEGY as adopted in the middle. End game is just a 
"finish it up", but surrender always comes before the King falls. Trump 
has chosen some gambits to go with, and Rocket Man has studied openings. 
US-Russia relations are in an everlasting middle game, even if 
sometimes believed to have been checkmated, and sometimes not even to 
have opened. Politics differs from chess not in changing strategies but 
in allowing one to backstep if some moves turn out to be no good.

> So you may be able to clean the inside of my house, paint the outside
> of my house, go get the items on my shopping list, come back and cook
> the food perfectly at my house according to my tastes, and so forth,
> but can you do my math?

My credo is that artificial intelligence is indeed related to logic 
somehow, but is it intelligence to handle rich expressions (like 
shopping lists) and to skillfully formulate statements (e.g. for debate 
concerning recipes in the kitchen), or is it intelligence to 
articulately draw conclusions and create theories based on whatever 
expressions and statements are on the table? In the latter group, 
Stephen might be a bit hesitant about Kurt's flamboyant and dynamic use 
of the lists on the table. Kurt says a list is a list even if one is a 
list of groceries and the other is a list of conclusions of debates in 
the kitchen about recipes involving items from those grocery lists. IBM, 
and many others, even Putin, say that the future is in AI. And I honestly 
believe we could help the IT industry (READ: Apple, Facebook, Google, 
IBM, Microsoft, please invite us!) to do much more. And we have to 
cooperate. FOMs, CATs, ALGs, LOGs, SYSs, even PHILs and LANGs, and many 
others could interact more, and learn from each other.

No, AI cannot do math. Can anyone? Maybe MATH is the only thing doing 
math? Math is (not exists), and it's doing it, and we're all watching, 
trying to understand what we see. Computers are not in that audience. 
Computers help us with coffee during breaks in that play. I think it's 
actually like a reverse Seven Ages, where stage and audience change 
roles. Math is in the audience, and we mathematicians play our part. 
From unwillingly to school, through seeking the bubble reputation, 
coming to jealousy and quarrel, going to spectacles on nose, until 
second childishness and mere oblivion. In the audience, a prime, if it 
is, remains so.

> Harvey

Thanks, Harvey.

Seat belt unbuckled.

