Usual coordinates:
Friday, 2:00PM
Room 1314
Warren Weaver Hall
251 Mercer Street
Fall 2013 Schedule
Friday Dec. 13
2:00PM
WWH 1314
Adam Marcus (Yale)
A solution to Weaver's $KS_2$
Abstract:
We will outline the proof that gives a positive solution to Weaver's conjecture $KS_2$. That is, we will show that any isotropic collection of vectors whose outer products sum to twice the identity can be partitioned into two parts such that each part is a small distance from the identity. The distance will depend on the maximum length of the vectors in the collection but not on the dimension (the two requirements necessary for Weaver's reduction to a solution of Kadison--Singer). This will include introducing a new technique for establishing the existence of certain combinatorial objects that we call the "Method of Interlacing Polynomials."
This talk is intended to be accessible to a general mathematics audience, and represents joint work with Dan Spielman and Nikhil Srivastava.
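For reference, one common formulation of Weaver's conjecture $KS_2$, paraphrased here as background (not quoted from the speaker): there exist universal constants $\eta \ge 2$ and $\theta > 0$ such that whenever $w_1, \dots, w_m \in \mathbb{C}^d$ satisfy $\|w_i\| \le 1$ for all $i$ and
$$\sum_{i=1}^m |\langle u, w_i \rangle|^2 = \eta \quad \text{for every unit vector } u \quad (\text{equivalently } \sum_i w_i w_i^* = \eta I),$$
there is a partition $\{1,\dots,m\} = S_1 \cup S_2$ with
$$\sum_{i \in S_j} |\langle u, w_i \rangle|^2 \le \eta - \theta \quad \text{for every unit vector } u \text{ and each } j \in \{1,2\}.$$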
Friday Dec. 6
Princeton
CCI meeting
Program
TBA
Friday Nov. 22
9:30AM -- 4:10PM
Auditorium 109
New York Area Theory Day
Program
9:30--10:00: Coffee and bagels
10:00--10:55: Prof. Noga Alon (Tel Aviv University and IAS)
Radio Networks
10:55--11:05: Short break
11:05--12:00: Prof. Adam Marcus (Crisply, LLC and Yale University)
Bipartite Ramanujan Graphs of all degrees
12:00--02:00: Lunch break
02:00--02:55: Prof. Adi Shamir (Weizmann Institute of Science and NYU)
Dissection: A New Paradigm for Solving Bicomposite Search Problems
02:55--03:15: Coffee break
03:15--04:10: Prof. David Steurer (Cornell University)
An Information Complexity Approach to Extended Formulations
For directions, please see
http://www.cims.nyu.edu/direct.html and
http://cs.nyu.edu/csweb/Location/directions.html (building 46)
To subscribe to our mailing list, follow instructions at
http://www.cs.nyu.edu/mailman/listinfo/theory-ny
Organizers:
Yevgeniy Dodis
Tal Rabin
Cliff Stein
========
Abstracts
========
Radio Networks
Prof. Noga Alon (Tel Aviv University and IAS)
Abstract: The radio network model is a standard theoretical model for
studying
wireless communication. In this model, communication happens in
synchronous rounds in which each node can either send a packet or listen.
Only listening nodes for which exactly one neighbor sends receive
a packet. The study of this model leads to intriguing combinatorial,
algorithmic and graph theoretic questions. I will describe several recent
results on this topic, including the efficient design of shared radio
networks (in joint work with Moitra and Sudakov), and the investigation
of the broadcast problem in this model (in joint work with Ghaffari,
Haeupler and Khabbazian).
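For readers unfamiliar with the model, the following minimal Python sketch (our illustration, not the talk's algorithms) simulates the reception rule of a single synchronous round; the random graph and the transmission probability are placeholders.

import random

def radio_round(adj, senders):
    """One synchronous round of the radio network model: a listening node
    receives a packet iff exactly one of its neighbors transmits."""
    received = set()
    for v in adj:
        if v in senders:
            continue  # a node either sends or listens in a given round
        if sum(1 for u in adj[v] if u in senders) == 1:
            received.add(v)
    return received

# Toy broadcast: each informed node transmits with probability 1/2 per round
# (purely illustrative randomization, not the algorithms from the talk).
random.seed(0)
n = 30
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < 0.15:
            adj[u].add(v); adj[v].add(u)

informed = {0}
for _ in range(20):
    senders = {v for v in informed if random.random() < 0.5}
    informed |= radio_round(adj, senders)
print(f"{len(informed)} of {n} nodes informed")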
=============================
Bipartite Ramanujan Graphs of all degrees
Prof. Adam Marcus (Crisply, LLC and Yale University)
Abstract: We will outline the proof that shows the existence of
bipartite Ramanujan Graphs of any degree as well as some of mixed
degrees. Our approach uses the idea of Bilu and Linial to show that
there exists a 2-lift of a given Ramanujan graph which maintains the
Ramanujan property. This will include introducing a new technique for
establishing the existence of certain combinatorial objects that we call
the "Method of Interlacing Polynomials."
This talk is intended to be accessible to a general computer science
audience, and represents joint work with Dan Spielman and Nikhil Srivastava.
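As background (our gloss, not part of the abstract): a connected $d$-regular graph is Ramanujan if every adjacency eigenvalue $\lambda \neq \pm d$ satisfies
$$|\lambda| \le 2\sqrt{d-1},$$
and for bipartite graphs the spectrum is symmetric about $0$, so the same bound is required of all eigenvalues other than $\pm d$. In the Bilu--Linial approach referred to above, the new eigenvalues of a 2-lift are exactly the eigenvalues of a $\pm 1$-signed adjacency matrix, so it suffices to find a signing whose eigenvalues all lie in $[-2\sqrt{d-1},\, 2\sqrt{d-1}]$.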
=============================
Dissection: A New Paradigm for Solving Bicomposite Search Problems
Prof. Adi Shamir (Weizmann Institute of Science and NYU)
Abstract: Most combinatorial search problems can be described by a
collection of possible states, a list of possible actions which
map each current state into some next state, and a pair of
initial and final states. The problem is to find a sequence of
actions which map the given initial state into the desired
final state. One can always solve such problems by trying
out all the possible sequences of actions, but in many
cases it is possible to exploit special properties of the states
and actions in order to lower the exponential complexity of exhaustive
search. In this talk I will introduce the new notion of bicomposite
search problems, whose solution can be partitioned
both along the action and the state dimensions, and show
that such problems can be solved with a new algorithmic
paradigm which we call dissection with greatly reduced
combinations of time and space complexities. To demonstrate
the wide applicability of these ideas, I will show
how to improve the best known algorithms for
untangling Rubik's cube, for solving various set
partition problems, and for breaking multiple
encryption schemes.
Joint work with Itai Dinur, Orr Dunkelman, and Nathan Keller.
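As a point of reference (not part of the talk), here is a minimal Python sketch of the classic meet-in-the-middle attack on double encryption, the basic time/memory tradeoff that dissection generalizes to bicomposite problems; the toy cipher, key size, and constants below are invented purely for illustration.

KEYBITS = 12   # tiny keyspace so the demo runs instantly
MASK = 0xFFFF  # 16-bit blocks

def toy_encrypt(key, block):
    """Invented, invertible 16-bit toy 'round' (illustration only)."""
    return ((block ^ key) + key) & MASK

def toy_decrypt(key, block):
    return ((block - key) & MASK) ^ key

def double_encrypt(k1, k2, block):
    return toy_encrypt(k2, toy_encrypt(k1, block))

def meet_in_the_middle(plaintext, ciphertext):
    """Recover candidate (k1, k2) pairs with ~2^KEYBITS time and memory,
    instead of 2^(2*KEYBITS) time for exhaustive search."""
    forward = {}
    for k1 in range(1 << KEYBITS):
        forward.setdefault(toy_encrypt(k1, plaintext), []).append(k1)
    matches = []
    for k2 in range(1 << KEYBITS):
        mid = toy_decrypt(k2, ciphertext)          # invert the outer layer
        for k1 in forward.get(mid, []):            # look up the inner layer
            matches.append((k1, k2))
    return matches

# Usage: plant a secret key pair and check it is among the candidates.
secret = (0x3A7, 0x81C)
pt = 0x1234
ct = double_encrypt(secret[0], secret[1], pt)
assert secret in meet_in_the_middle(pt, ct)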
=============================
Approximate Constraint Satisfaction Requires Large LP Relaxations
Prof. David Steurer (Cornell University)
Abstract: We prove super-polynomial lower bounds on the size of linear
programming relaxations for approximation versions of constraint
satisfaction problems. We show that for these problems, general
polynomial-sized linear programs are exactly as powerful as those
arising from the Sherali-Adams hierarchy.
Concrete consequences are that polynomial-sized linear programming
relaxations cannot achieve better than 1/2-approximations for MAX-CUT,
7/8-approximations for MAX-3SAT, or 3/4-approximations for MAX-2SAT.
Our techniques bring together two formerly disparate lines of research.
On the one hand, there has been a recent sequence of exciting lower
bounds on the size of extended formulations for various polytopes that
arise in combinatorial optimization. At the same time, researchers have
extensively studied the power of specific hierarchies, such as the
Sherali-Adams LP hierarchy, for approximating NP-hard problems.
Joint work with Siu On Chan, James R. Lee, and Prasad Raghavendra.
=============================
"Every Day is Theory Day"
Friday Nov. 15
2:00PM
WWH 1314
Leonid Gurvits (CUNY)
Breaking e^n barrier for deterministic poly-time approximation of the permanent and settling Friedland's conjecture on the Monomer-Dimer Entropy
11:00-12:00pm: Intro to LDCs and LCCs (Shubhangi Saraf)
12:00-1:00: Lunch
1:00 - 2:00: Breaking the quadratic barrier for linear 3-LCCs over the reals (Zeev Dvir)
2:00 - 2:30: Coffee break
2:30 - 3:00: Approximate LDCs over the reals (Guangda Hu)
3:00 - 4:00: Some problems about codes (Swastik Kopparty)
Abstracts:
Title: Intro to LDCs and LCCs
Speaker: Shubhangi Saraf
Abstract:
I'll give an introduction to locally decodable codes and locally correctable codes. I'll survey the known upper bounds and lower bounds and highlight some of the most interesting challenges.
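For orientation, the standard definitions we assume here (paraphrased, not quoted from the speaker): a code $C : \{0,1\}^k \to \Sigma^n$ is a $(q, \delta, \epsilon)$-locally decodable code if there is a randomized decoder which, given an index $i$ and any word $y$ within relative Hamming distance $\delta$ of some codeword $C(x)$, reads at most $q$ coordinates of $y$ and outputs $x_i$ with probability at least $1/2 + \epsilon$. A locally correctable code is defined analogously, except that the decoder must recover coordinates $C(x)_j$ of the codeword itself rather than of the message.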
----
Title: Breaking the quadratic barrier for linear 3-LCCs over the reals
Speaker: Zeev Dvir
Abstract:
We prove that 3-query linear locally correctable codes over the Reals of dimension d require block length n > d^{2+ε} for some ε > 0. This improves the known quadratic lower bounds which hold for any linear code.
Our proof introduces several new ideas to existing lower bound techniques. A key concept underlying the proof is that of a "well-spread" code, namely one in which no "large" sub-code is contained in "low" dimension. The proof uses a sequence of reductions that establish the lower bound over every field, so long as the code is not well-spread. If a code is well-spread (has no large subcode in a low dimension), and the underlying field is the Reals, we use a powerful geometric result of Barthe to derive a structure theorem on the decoding query triples. This structure supports (over every field) stronger random restriction arguments, and hence stronger lower bounds than previously available.
Joint work with Shubhangi Saraf and Avi Wigderson.
----
Title: Approximate LDCs over the reals
Speaker: Guangda Hu
Abstract: We consider a new model of Locally Decodable Codes (LDCs) over the reals in which the decoding can be `approximate' in the Euclidean metric. Such codes are equivalent to configurations of points in R^d in which many q-tuples span a point which is close to a standard basis vector. We show that, even with this relaxation of the decoding, 2-query approximate LDCs require exponential encoding length.
Joint work with Zeev Dvir and Shubhangi Saraf.
-----
Title: Some problems about codes
Speaker: Swastik Kopparty
Abstract: I will present some open problems about codes related to:
1. complexity of decoding random codes
2. list-decoding from errors and from erasures
3. locally testable codes
4. good codes with good dual codes
and more ...
Thursday Nov. 7
2:00PM
WWH 1314
Ali Kemal Sinop (IAS)
Towards a better approximation for sparsest cut?
Abstract: We give a new $(1+\epsilon)$-approximation for the sparsest cut
problem on graphs where small sets expand significantly more than the
sparsest cut (expansion of sets of size $n/r$ exceeds that of the
sparsest cut by a factor $\sqrt{\log n\log r}$, for some small $r$;
this condition holds for many natural graph families). We give two
different algorithms. One involves Guruswami-Sinop rounding on the
level-$r$ Lasserre relaxation. The other is combinatorial and involves a
new notion called Small Set Expander Flows (inspired by the expander
flows of [ARV'04]) which we show exists in the input graph. Both
algorithms run in time $2^{O(r)} \mathrm{poly}(n)$.
We also show similar approximation algorithms in graphs with genus $g$
with an analogous local expansion condition.
This is the first algorithm we know of that achieves a
$(1+\epsilon)$-approximation on such a general family of graphs.
Friday Nov. 1
2:00PM
WWH 1314
Thomas Vidick (Newton Institute)
Three-player entangled XOR games are NP-hard to approximate
Abstract:
We prove an analogue of Hastad's celebrated optimal hardness of approximation results for MAX-E3-LIN2 (JACM'01) in the setting of games with players sharing quantum entanglement: for any eps>0 the problem of finding a factor (2-eps) approximation to the entangled value of a three-player XOR game is NP-hard. This is the first constant-factor hardness of approximation result for entangled games, and can be thought of as an extension of the classical PCP theorem to the case of entangled players.
The key technical component of our work is a soundness analysis of a point-vs-plane low-degree test against entangled players. This extends and simplifies the analysis of the multilinearity test by Ito and Vidick (FOCS'12). In the talk I will explain how the demonstration that this test remains sound in the presence of entanglement (which can be thought of as an additional distributed resource, strictly stronger than shared randomness) sheds new light both on the properties of the classical test itself and on those of quantum entanglement, and in particular one of its key features, monogamy.
The talk should be accessible to all; no background in quantum computing will be assumed.
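For context, the definitions assumed here (paraphrased, not quoted): in a three-player XOR game the referee draws a question triple $(x,y,z)$ from a known distribution $\pi$, the players answer bits $a(x), b(y), c(z)$ without communicating, and they win iff $a(x) \oplus b(y) \oplus c(z) = f(x,y,z)$ for a prescribed predicate $f$. The classical value is
$$\omega(G) = \max_{a,b,c}\ \Pr_{(x,y,z) \sim \pi}\left[a(x) \oplus b(y) \oplus c(z) = f(x,y,z)\right],$$
which is the optimum of a (tripartite, weighted) MAX-E3-LIN2 instance; the entangled value $\omega^*(G)$ additionally allows the players to share an arbitrary quantum state and choose their answers by measuring it.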
Friday Oct. 25
2:00PM
WWH 1314
Ravishankar Krishnaswamy (Princeton)
Capacitated Network Design: Algorithms and Hardness
Abstract: In this talk, we will discuss the following network design problem: we are given a multicommodity demand D (or a collection of source-sink pairs each with a routing requirement), and a capacitated graph. The goal is to pick the "smallest subgraph" so that the given demand can be completely routed. This basic problem is a crucial subroutine for many algorithmic problems, such as buy-at-bulk, energy-minimizing routing, etc. Our understanding, however, is limited, even for the case of one demand pair, which is the classical s-t flow problem. In this talk, we survey what we know about this problem and present some recent algorithmic results for special cases and hardness results.
Sanjeev, Rong and Ankur will present a tapestry of current machine learning research (both theory and practice); essentially all of it involves some form of nonlinear model-fitting. We'll also present some of our own very recent work giving provable bounds for sparse coding and learning deep nets, as well as recent and older ML-theory work by others (e.g., the best paper at ICML'13 gave feasible versions of the Hilbert basis theorem without Grobner bases, the approach some of you may be familiar with).
Tensor Decompositions: Uniqueness, Algorithms and Applications
Tensor decomposition -- the high dimensional analog of matrix
decomposition -- is a powerful primitive that arises in statistics,
signal processing, data mining and machine learning. Unfortunately,
most problems about tensor decompositions are either provably hard,
or not known to admit efficient solutions. We will describe two recent
results and applications:
1. We give a robust version of a classic theorem of Kruskal which
shows uniqueness of tensor decompositions under a certain rank condition.
That is, we show that the decomposition is unique even if the entries of the
tensor have inverse polynomial error. This gives polynomial sample
complexity for learning a host of latent variable models.
2. We give efficient algorithms for tensor decomposition in the
smoothed setting. For example, this shows that learning mixtures of
Gaussians is tractable in new overcomplete regimes.
(ongoing works by Aditya Bhaskara, Moses Charikar, Ankur Moitra, Aravindan Vijayaraghavan)
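In the standard notation we assume here, a rank-$R$ decomposition of a third-order tensor is
$$T = \sum_{i=1}^{R} a_i \otimes b_i \otimes c_i,$$
and Kruskal's theorem gives uniqueness (up to permutation and scaling of the factors) whenever the Kruskal ranks of the three factor sets satisfy $k_A + k_B + k_C \ge 2R + 2$; item 1 above is a robust analogue of this condition.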
Thursday Oct. 17
2:00PM
WWH 1314
Brendan Juba (Harvard)
Efficient reasoning in PAC Semantics
Abstract: Machine learning is often employed as one step in a larger application, serving to perform information extraction or data mining for example. The rules obtained by such learning are then used as inputs to a further analysis. As a consequence of the inputs to this analysis having been learned, however, its conclusions can only be theoretically guaranteed to satisfy the same standard as the inputs---that of "PAC Semantics" (Valiant, 2000). In this talk, we explore the benefits of taking the problems of learning and logical reasoning together, capturing a relatively general family of such applications. Briefly, the benefit is that we can simultaneously (1) handle incomplete information, (2) utilize rules that are a reasonable but imperfect fit for the data, and (3) reach conclusions more efficiently than is achieved by known algorithms for reasoning from rules alone. Precisely, we consider a problem of testing the validity of a "query" formula (hypothesis) against incomplete data. We present algorithms for soundly verifying such queries that succeed whenever there exist learnable rules that suffice to complete a simple proof of the query, matching what can be achieved by such a two-stage learning and reasoning process.
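The standard referred to here can be stated as follows (our paraphrase of PAC Semantics, assumed rather than quoted): a formula $\varphi$ is $(1-\epsilon)$-valid with respect to a distribution $D$ over examples if $\Pr_{x \sim D}[\varphi(x) = 1] \ge 1 - \epsilon$; rules produced by a learning algorithm, and hence conclusions derived from them, can only be guaranteed valid in this probabilistic sense.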
Friday Oct. 11
2:00PM
WWH 1314
Zeev Dvir (Princeton)
Incidence theorems and their applications
Abstract: Incidence theorems describe the way lines, points and other geometric objects intersect each other. Basic questions such as `How many incidences can a set of lines have with a set of points?' or `What is the best way to pack a set of lines in different directions?' can lead to surprising applications in computational complexity and combinatorics. These include new constructions of seeded extractors and multi-source extractors, lower bounds for locally correctable codes and counting distinct distances (Erdos' distance problem). In this talk I will attempt to describe some of these results which demonstrate the expressive power of incidence geometry.
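The classical answer to the first question above, stated here only as background (not quoted from the abstract), is the Szemeredi--Trotter theorem: $m$ points and $n$ lines in the plane determine at most
$$O\left(m^{2/3} n^{2/3} + m + n\right)$$
incidences, and this bound is tight up to constant factors.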
Abstract: The discrepancy of a matrix A is a classical quantity, defined to be the
minimum of the infinity norm of Ax when x ranges over all vectors with
coordinates in {-1, +1}. A more robust version of discrepancy is hereditary
discrepancy: the maximum discrepancy of all submatrices of A. Recently,
Nikolov, Talwar, and Zhang gave the first polylogarithmic approximation to
hereditary discrepancy by relating it to the error of a differentially
private algorithm. In this work we give an approximation algorithm for
hereditary discrepancy that circumvents differential privacy and is based
entirely on tools from convex geometry. At the heart of our algorithm is
the construction of a simple geometric certificate of an upper bound on
hereditary discrepancy -- a "small" ellipsoid containing the columns of the
matrix A. The certificate can be used to compute a low value SDP solution
to a natural relaxation of discrepancy for any submatrix of A, and the
solution can then be rounded to a low discrepancy coloring using Bansal's
rounding algorithm from FOCS 2010. Our approach leads to an improved
approximation ratio of O(log^2 mn), and easily generalizes to a matrix-norm
version of discrepancy.
Joint work with Kunal Talwar.
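As a concrete illustration of the quantities in the abstract (our sketch, not the authors' algorithm), the brute-force Python code below evaluates the definitions on a tiny matrix: disc(A) minimizes the infinity norm of Ax over x in {-1,+1}^n, and herdisc(A) maximizes disc over column submatrices. It is exponential-time by design; the talk is about approximating herdisc efficiently.

from itertools import combinations, product

import numpy as np

def disc(A):
    """disc(A) = min over x in {-1,+1}^n of ||Ax||_inf (brute force)."""
    n = A.shape[1]
    return min(np.max(np.abs(A @ np.array(x)))
               for x in product((-1, 1), repeat=n))

def herdisc(A):
    """Hereditary discrepancy: max of disc over all column submatrices."""
    n = A.shape[1]
    return max(disc(A[:, list(cols)])
               for k in range(1, n + 1)
               for cols in combinations(range(n), k))

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
print(disc(A), herdisc(A))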
Friday Sept. 27
2:00PM
WWH 1314
Clément Canonne (Columbia)
Testing probability distributions using conditional samples
Abstract: One of the most fundamental problem paradigms in statistics
is that of inferring some information about an unknown probability distribution D,
given access to independent samples drawn from it. More than a decade
ago, Batu et al. initiated the study of problems of this type from
within the framework of /property testing/, a setting where, given an
unknown "massive object" an algorithm can access only by making a small
(sublinear) number of "local inspections" (queries) of the object, the
goal is to determine whether the object has a particular property. The
algorithm must accept if the object has the desired property, and reject
if the object is far from every object with the property. In
distribution property testing, the "massive object" is an unknown
probability distribution D over an N-element set, and the tester
accesses the distribution by drawing independent samples from it.
We study a recently introduced framework for such a task, by considering
distribution testing algorithms that have access to a /conditional
sampling oracle/. This is an oracle that takes as input a subset S of
the domain [N]={1,...,N} of the unknown probability distribution D and
returns a draw from the conditional probability distribution D
restricted to S. This new model allows considerable flexibility in the
design of distribution testing algorithms; in particular, testing
algorithms in this model can be adaptive.
We study a wide range of natural distribution testing problems in this
new framework and some of its variants, giving both upper and lower
bounds on query complexity. These problems include testing whether D is
the uniform distribution U; testing whether D = D* for an explicitly
provided D*; testing whether two unknown distributions D1 and D2 are
equivalent; and estimating the variation distance between D and the
uniform distribution.
At a high level our main finding is that the new "conditional sampling"
framework we consider is a powerful one: while all the problems
mentioned above have Omega(N^(1/2)) sample complexity in the standard
model (and in some cases the complexity must be almost linear in N), we
give poly(log N, 1/eps)-query algorithms (and in some cases
poly(1/eps)-query algorithms independent of N) for all these problems in
our conditional sampling setting.
Joint work with Dana Ron (Tel-Aviv University) and Rocco Servedio
(Columbia University).
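To make the access model concrete, here is a minimal Python sketch of a conditional sampling oracle as we understand it from the abstract; the class name and the zero-mass convention are our own choices for illustration, and the testers from the paper are not implemented here.

import random

class CondSampler:
    def __init__(self, probs):
        """probs: list of N probabilities for the unknown distribution D
        over {0, ..., N-1} (hidden inside the oracle)."""
        self._p = probs

    def cond_sample(self, S):
        """Return a draw from D conditioned on the subset S of the domain."""
        S = list(S)
        weights = [self._p[i] for i in S]
        if sum(weights) == 0:
            # Conventions vary when D(S) = 0; here we return a uniform element of S.
            return random.choice(S)
        return random.choices(S, weights=weights, k=1)[0]

# Example query: compare the masses of two points directly by conditioning on them.
D = CondSampler([0.4, 0.1, 0.1, 0.1, 0.3])
draws = [D.cond_sample({0, 4}) for _ in range(1000)]
print("empirical P(0 | {0,4}) ~", draws.count(0) / 1000)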
Thursday Sept. 19
2:00PM
WWH 1314
Ben Lee Volk (Technion)
Boolean functions with small spectral norm, sparse boolean functions and
decision tree complexity
Abstract: In this talk we show several results regarding Boolean functions with small
spectral norm (the spectral norm of f is the sum of the absolute values of
its Fourier coefficients).
Specifically, we prove the following results for Boolean functions on n
variables with spectral norm A.
1. There is a subspace V of co-dimension at most A^2 such that f|_V is
constant.
2. f can be computed by a parity decision tree of size 2^{A^2}n^{2A}. (A
parity decision tree is a decision tree whose nodes are labeled with
arbitrary linear functions.)
3. If in addition f has at most s nonzero Fourier coefficients, then f can
be computed by a parity decision tree of depth A^2 log s.
4. For a "small" A, f can be epsilon-approximated by a parity decision tree
of depth O(log(1/eps)). Furthermore, this tree can be efficiently learned
using membership queries.
All the results above also hold (with a slight change in parameters) for
functions over (Z_p)^n.
Based on joint work with Amir Shpilka and Avishay Tal
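For concreteness (our illustration, not part of the abstract), the brute-force Python sketch below computes the spectral norm A = sum_S |f_hat(S)| of a Boolean function f: {-1,1}^n -> {-1,1} directly from the definition, summing over all 2^n characters.

from itertools import product

def fourier_coefficient(f, S, n):
    """f_hat(S) = E_x[ f(x) * prod_{i in S} x_i ] over x in {-1,1}^n."""
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2**n

def spectral_norm(f, n):
    """Sum of |f_hat(S)| over all subsets S of [n] (brute force)."""
    return sum(abs(fourier_coefficient(f, [i for i in range(n) if s[i]], n))
               for s in product((0, 1), repeat=n))

# Example: parity on 3 bits has spectral norm 1; majority on 3 bits has spectral norm 2.
xor3 = lambda x: x[0] * x[1] * x[2]
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(spectral_norm(xor3, 3), spectral_norm(maj3, 3))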
Thursday Sept. 5
1:00PM
WWH 1302
Ronald de Wolf (CWI)
Approximate degree bounds for all and almost all Boolean functions
Abstract: We present two recent results about the degree of real multivariate polynomials that approximate Boolean functions up to small constant error. (1) All Boolean functions that depend on n variables need degree Omega(log n / loglog n), and there exist functions where this bound is tight. This contrasts with the case of polynomials that exactly represent the function, where the correct degree bound is log n (Nisan and Szegedy'92). (2) O'Donnell and Servedio'03 showed that almost all n-bit Boolean functions have approximate degree roughly n/2. We show that if we approximate a function with a sum of squares of polynomials, then for almost all n-bit Boolean functions the polynomials that are squared already need degree roughly n/2. These two results have applications to and are motivated by quantum complexity in the query model, but this talk will mostly be about analysis of Boolean functions, not quantum computing. The first result is joint with Andris Ambainis (Complexity'13), the second is joint with Andris Ambainis, Arturs Backurs, and Juris Smotrovs (STACS'13).
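Here the approximate degree of $f : \{0,1\}^n \to \{0,1\}$ is, in the usual notation we assume (with the conventional error constant $1/3$),
$$\widetilde{\deg}(f) = \min\{\deg p : |p(x) - f(x)| \le 1/3 \text{ for all } x \in \{0,1\}^n\},$$
the minimum degree of a real polynomial approximating $f$ pointwise.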
If you would like to present something, please send an email to: jop (dot) briet (at) cims (dot) nyu (dot) edu
To subscribe to the mailing list, see: www.cs.nyu.edu/mailman/listinfo/cs_theory_seminar/