[FOM] Talk on Formal Verification
Walt Read
walt.read at gmail.com
Fri Feb 5 15:42:34 EST 2016
Admittedly my knowledge isn't current and my last work was on neural
modeling - specifically *not* neural nets as the term is usually used,
though I did a lot of that earlier. And I hope we're not getting too
far off the FoM topic here. I'm mostly thinking about using AI in math
and I'll trust our moderators to keep us on track.
The problem with neural nets is that they don't really "come up" with
anything. They're a form of curve-fitting for a function, taking
inputs, say a board position, and producing outputs, say a move,
obtained by what is basically a statistical analysis of the I/O pairs
used to train them. In my experience, they're critically dependent on
representation (encoding of inputs and outputs) and, of course, on the
training examples they're given. Asking what any particular node
"means" is generally useless. There have been attempts to get around
this by using extra layers of the net to encode I/O but even then it's
been hard to see just what the net is "seeing" as relevant. So while
they can be effective, there's no "understanding" to be gotten from
them.
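To make the curve-fitting point concrete, here's a minimal sketch (the
toy board/move encodings and the little net are invented for
illustration, not anyone's actual system). The net is fit to I/O pairs
by gradient descent, and nothing in it assigns a meaning to any hidden
node:

import numpy as np

rng = np.random.default_rng(0)

# Toy I/O pairs: a 9-cell "board" encoded as +1/0/-1 and a move
# encoded one-hot. Both encodings are invented for illustration.
X = rng.choice([-1.0, 0.0, 1.0], size=(200, 9))
moves = np.argmax(X == 0, axis=1)   # toy rule: play the first empty cell
Y = np.eye(9)[moves]

# Two-layer net fit by plain gradient descent on squared error:
# statistical curve-fitting to the training pairs, nothing more.
W1 = rng.normal(0.0, 0.1, (9, 16))
W2 = rng.normal(0.0, 0.1, (16, 9))
for _ in range(3000):
    H = np.tanh(X @ W1)             # hidden layer
    E = H @ W2 - Y                  # error on the training pairs
    W2 -= 0.05 * H.T @ E / len(X)
    W1 -= 0.05 * X.T @ ((E @ W2.T) * (1.0 - H ** 2)) / len(X)

# What does hidden node 0 "mean"? All we can inspect is 9 numbers.
print("hidden node 0 incoming weights:", np.round(W1[:, 0], 2))

Change the input encoding or the training set and you get a different
set of weights with no more interpretable structure than before.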
These techniques can work well for board games and other situations
where we only have to find a specific path along a well-defined tree.
But in getting new results in math, there's no fixed list of legal
moves. We value most the work that uses some device the rest of us
never thought of, often a variation on a known "trick", a non-obvious
use of a familiar tool, or an unexpected combination of well-known
strategies.
Neural nets may find these but we generally won't be able to extract
any insights from them. (Even small changes in the input tend to
produce big changes in the intermediate node values.) The genetic
algorithm-type systems, as with Lenat or Goldberg, have more potential
for this but they are even more critically dependent on representation
than neural nets. (Re)structuring representations to support our needs
is one thing we humans are pretty good at and something we do a lot in
the course of doing science and math.
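For contrast, here's what the well-defined-tree situation looks like
(a hypothetical toy game, just for illustration): when the legal moves
at every position are fixed and enumerable, finding a winning path
through the tree is mechanical.

# Toy game: players alternately take 1-3 stones; whoever takes
# the last stone wins. The move list at every node is fixed.
def legal_moves(stones):
    return [t for t in (1, 2, 3) if t <= stones]

def negamax(stones):
    # +1 if the player to move can force a win, else -1.
    if stones == 0:
        return -1                   # previous player took the last stone
    return max(-negamax(stones - t) for t in legal_moves(stones))

def best_move(stones):
    return max(legal_moves(stones), key=lambda t: -negamax(stones - t))

print(best_move(10))                # prints 2: leave a multiple of 4

Nothing like legal_moves() exists for "the next step in a new proof";
that's where the fixed-tree picture breaks down.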
-Walt
On Fri, Feb 5, 2016 at 6:10 AM, David McAllester <mcallester at ttic.edu> wrote:
>
>
> On Thu, Feb 4, 2016 at 6:39 PM, Walt Read <walt.read at gmail.com> wrote:
>
>> I spent a good piece of my life working in AI, trying to get computers
>> to "understand" and I think there's a serious point here.
>
>
> ditto
>
>>
>> ...
>> But AI
>> requires us to write detailed, formal algorithms to generate these
>> insights, complete with input and carefully specified steps to
>> progress.
>
>
> Not any more. I highly recommend reading the AlphaGo paper in Nature. Go is
> interesting precisely because it seems to require intuition that is not
> backed up by proof --- not backed up by anything like a complete case
> analysis. AlphaGo has a neural net move proposer. Could a net propose 101!?
> It also has a neural net position evaluator. Could a neural net truth
> estimator decide that "prime numbers don't behave like that"? Of course
> AlphaGo's networks have to be trained for the game of Go --- the machine
> must "build intuition" for the domain. If we think of mathematics as a
> search for proofs, it is formally equivalent to a one-player game. AlphaGo
> is a very general game architecture.
>
> I'm not saying that AlphaGo immediately gives a solution to automating
> mathematics. But I suspect that gradient descent on continuous computation
> --- gradient descent on analog circuits or higher-level vector-valued
> computation --- can ultimately be adapted to train systems to propose
> mathematical ideas and judge likelihood of truth. AlphaGo is a very nice
> case study.
>
> David
>