Benny Applebaum

TITLE:
Basing Lower-Bounds
for Learning on Worst-Case Assumptions

ABSTRACT:
We consider the question of whether P \neq NP implies that there
exists some concept class that is efficiently representable but is
still hard to learn in the PAC model of Valiant (CACM '84). We show
that unless the Polynomial Hierarchy collapses, such a statement
cannot be proven via a large class of reductions, including Karp
reductions, truth-table reductions, and a restricted form of
non-adaptive Turing reductions. Moreover, a proof that uses a Turing
reduction with a constant number of levels of adaptivity would have an
important cryptographic consequence, as it would yield a transformation
from any average-case hard problem in NP into a one-way function. Our results
hold even in the stronger model of agnostic learning.
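
For concreteness, the statement whose provability is studied above can be
written schematically as follows (the notation, including the symbol
\mathcal{C} for the concept class, is chosen here only for illustration):

  \[
    \mathrm{P} \neq \mathrm{NP}
    \;\Longrightarrow\;
    \exists\, \mathcal{C} \ \text{efficiently representable such that no
    polynomial-time algorithm PAC-learns } \mathcal{C}.
  \]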

These results are obtained by showing that lower bounds for improper
learning are intimately related to the complexity of zero-knowledge
arguments and to the existence of weak cryptographic primitives. In
particular, we prove that if a language L reduces to the task of
improper learning of circuits, then, depending on the type of
reduction used, either (1) L has a statistical zero-knowledge
argument system, or (2) the worst-case hardness of L implies the
existence of a weak variant of one-way functions defined by
Ostrovsky-Wigderson (ISTCS '93). Interestingly, we observe that the
converse implication also holds. Namely, if (1) or (2) holds, then the
intractability of L implies that improper learning is hard.
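
Schematically, writing the worst-case intractability of L as L \notin
\mathrm{BPP} and the class of languages with statistical zero-knowledge
argument systems as \mathrm{SZKA} (abbreviations chosen here, not taken
from the abstract), the two directions read:

  \[
    L \ \text{reduces to improper learning of circuits}
    \;\Longrightarrow\;
    (1)\ L \in \mathrm{SZKA}
    \quad\text{or}\quad
    (2)\ \bigl(L \notin \mathrm{BPP} \Rightarrow
          \text{weak one-way functions (Ostrovsky-Wigderson) exist}\bigr),
  \]
  \[
    (1)\ \text{or}\ (2)
    \;\Longrightarrow\;
    \bigl(L \notin \mathrm{BPP} \Rightarrow \text{improper learning is hard}\bigr).
  \]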