TITLE: Expander codes, compressed sensing matrices, and Euclidean sections of L_1
SPEAKER: Venkatesan Guruswami
University of Washington & Institute for Advanced Study
ABSTRACT:
Classical results in high-dimensional geometry state that in a random
subspace of R^N of dimension Theta(N), the L_1 and L_2 norms of every
vector differ by only a Theta(\sqrt{N}) factor (so each vector has its
mass spread nearly evenly over a constant fraction of the coordinates).
Indeed, this is a particular example of the use of the probabilistic
method, a technique which is ubiquitous in asymptotic geometric
analysis. Similar randomized geometric constructions arise in areas
like dimensionality reduction and compressed sensing, but it seems that
such objects are very hard to come by explicitly.
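The random-subspace phenomenon above is easy to observe numerically. The following sketch (not part of the talk; the dimensions and sample counts are illustrative choices) samples a random subspace of R^N of dimension N/2 and checks that random unit vectors in it have L_1 norm on the order of \sqrt{N}:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 500  # ambient dimension and subspace dimension (a constant fraction of N)

# Orthonormal basis for a random d-dimensional subspace of R^N,
# obtained via QR factorization of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((N, d)))

ratios = []
for _ in range(200):
    c = rng.standard_normal(d)
    v = Q @ c
    v /= np.linalg.norm(v)  # normalize to unit L_2 norm
    # Cauchy-Schwarz gives ||v||_1 <= sqrt(N) ||v||_2; for a random
    # subspace the ratio is also bounded below by a constant.
    ratios.append(np.linalg.norm(v, 1) / np.sqrt(N))

print(min(ratios), max(ratios))
```

For a random unit vector the coordinates behave like independent Gaussians of variance 1/N, so the printed ratios concentrate near \sqrt{2/pi} ≈ 0.80, bounded away from 0, whereas an explicit subspace achieving this is exactly what the talk constructs.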
We will describe constructions inspired by expander codes that can be
used to *explicitly* produce subspaces with distortion nearly as good
as a random subspace. Specifically, we construct an explicit subspace
X of R^N of dimension N(1-o(1)) such that every unit vector in X has
L_1 norm at least (almost) \sqrt{N}/poly(log N). We will also discuss
connections to sparse signal recovery (i.e., compressed sensing). The
technical ingredients in our work include Ramanujan graphs, sum-product
theorems in finite fields, and Kerdock codes/mutually unbiased bases.
[This is joint work with James Lee and Alexander Razborov.]