[FOM] Learnability

Timothy Y. Chow tchow at math.princeton.edu
Thu Jan 17 15:52:26 EST 2019

Victor Marek wrote:

> During a routine perusal of the site RealClearScience.com, I read a 
> piece about the article "Learnability can be undecidable", by S. 
> Ben-David, P. Hrubes et al.  The article is published in the journal 
> Nature Machine Intelligence.  In that paper the authors appear to show 
> that a certain aspect of machine learning (a pretty practical task) is 
> equivalent (in ZFC) to the Continuum Hypothesis, which is (as we know 
> since P.J. Cohen) undecidable in ZFC.

I think there is less to this result than meets the eye.

For the continuum hypothesis to even enter the picture, one must consider 
uncountable sets.  By default, in areas such as machine learning, 
computational complexity, etc., one usually assumes that the input to 
one's algorithm is given by a finite set of bits, and that the state of 
one's machine is also given by a finite set of bits.  Uncountable sets 
can't really enter the picture under these circumstances.
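To make that point concrete, here is a small sketch (my illustration, not anything from the paper): under the finite-bits assumption, every possible input is a finite binary string, and those strings can be listed in a single enumeration, so the space of inputs is countable.

```python
# Sketch: the set of finite binary strings is countable.  We enumerate
# them in order of length, assigning each string a natural-number index.
from itertools import count, product

def finite_bit_strings():
    """Yield all finite binary strings: '', '0', '1', '00', '01', ..."""
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

# Taking the first few terms shows the enumeration in action; every
# finite string appears at some finite position, witnessing countability.
gen = finite_bit_strings()
first = [next(gen) for _ in range(7)]
print(first)  # ['', '0', '1', '00', '01', '10', '11']
```

An uncountable family like F^* (defined below) can have no such enumeration, which is why it cannot arise as the set of possible machine inputs in this model.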

So how do the authors get uncountable sets to show up?  Well, they set 
themselves the task of learning something about F^*, where F^* is the 
family of all finite subsets of [0,1].  F^* is of course uncountable. 
There's nothing illegal about this, but it's a little odd.  How are we 
supposed to imagine being "given" an arbitrary finite subset of [0,1]? 
Even a single real number in [0,1] generally requires infinitely many 
bits to specify.  Are we somehow handed an actually infinite binary 
string?  We can hypothesize such a thing, but I find it dubious that any 
such analysis tells us anything meaningful about what we can do in 
practice.
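For what it's worth, the uncountability of F^* is a routine cardinality check (standard set theory, not part of the paper's argument): the singletons alone inject [0,1] into F^*, and every finite subset is determined by a finite tuple of reals, so

\[
2^{\aleph_0} = |[0,1]| \;\le\; |F^*| \;\le\; \Big|\bigcup_{n \ge 0} [0,1]^n\Big| = \aleph_0 \cdot 2^{\aleph_0} = 2^{\aleph_0}.
\]

Thus F^* has exactly the cardinality of the continuum, which is how questions like CH get a foothold in the first place.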

In other words, their mathematical argument could be completely correct, 
but all it would tell us is that weird things can happen if we try to 
apply concepts of "computation" to uncountable sets such as F^*.  And 
that's something we already knew without any fancy theorems.
