Usual coordinates:
Friday, 2:00PM
Room 1314
Warren Weaver Hall
251 Mercer Street
Winter 2014 Schedule
Friday May 2
9:45AM
WWH 1314
Swastik Kopparty (Rutgers)
Constant-rate PCPs with sublinear query complexity
Abstract:
The PCP theorem says that every NP-proof can be encoded into another proof, namely, a
probabilistically checkable proof (PCP), which can be tested by a verifier that queries only a
small part of the PCP. A natural question is how large a blow-up this encoding incurs,
i.e., how long the PCP is compared to the original NP-proof. The state-of-the-art results show
that one can encode proofs for instances of size n by PCPs of length n · poly log n that can be
verified using a constant number of queries. In this work, we show that if the query complexity
is relaxed to n^{\epsilon}, then one can construct PCPs of length O(n) for circuit-SAT, and PCPs of
length O(n log n) for all of NP. These PCPs have perfect completeness and constant soundness.
This is the first constant-rate PCP construction with constant soundness and nontrivial
query complexity (o(n)).
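For orientation, the comparison can be written in the common PCP_{c,s}[length, queries] shorthand (notation added here for clarity, not used in the abstract; $s < 1$ denotes a constant soundness error):
$$\text{state of the art:}\quad \mathrm{NP} \subseteq \mathrm{PCP}_{1,\,s}\big[\ n\,\mathrm{poly}\log n,\ \ O(1)\ \big]$$
$$\text{this work:}\quad \text{circuit-SAT} \in \mathrm{PCP}_{1,\,s}\big[\ O(n),\ \ n^{\epsilon}\ \big], \qquad \mathrm{NP} \subseteq \mathrm{PCP}_{1,\,s}\big[\ O(n\log n),\ \ n^{\epsilon}\ \big]$$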
Our proof relies on replacing the use of low-degree polynomials in PCP constructions with
polynomial-like objects over a constant-sized field: algebraic functions over an algebraic curve, also known as
algebraic-geometric codes, or AG codes. In particular, we show that the automorphisms of an AG code
can be used to simulate the role of affine transformations, which are crucial in earlier
high-rate algebraic PCP constructions. This allows us to use any
asymptotically good family of transitive AG codes over a constant-sized alphabet -- like the
family of AG codes presented by Stichtenoth in [Trans. Information Theory 2006] and in an
appendix to this work -- to construct constant-rate PCPs with polynomially small query complexity.
Joint work with Eli Ben-Sasson, Yohay Kaplan and Or Meir, with an appendix by Henning
Stichtenoth.
10:00 - 10:55 Dr. Vahab Mirrokni (Google Research)
Ad Allocation: Online Problems and Mechanism Design
10:55 - 11:05 Short break
11:05 - 12:00 Dr. Jennifer Chayes (Microsoft Research)
The Power of Locality for Network Algorithms
12:00 - 2:00 Lunch break
2:00 - 2:55 Prof. Venkatesan Guruswami (CMU)
Polar codes: Reliable communication with
complexity scaling polynomially in the gap to Shannon capacity
2:55 - 3:15 Coffee break
3:15 - 4:10 Prof. Adam Smith (Penn State)
Private Analysis of Graphs
Friday April 18
2:00PM
WWH 805
Nir Ailon (Technion)
Lower Bound for Fourier Transform in the Well-Conditioned Linear Model
(Or: If You Want a Faster Fourier Transform You'd Need to Sacrifice
Accuracy)
Abstract:
The Fourier Transform is one of the most important linear
transformations used in science and technology. Cooley and Tukey's Fast
Fourier Transform (FFT) from 1965 is a method for computing this
transformation in time $O(n\log n)$. In spite of its importance, no
nontrivial (super-linear) lower bounds were known without making very
strong assumptions. Morgenstern's result from 1974 provides an
$\Omega(n\log n)$ lower bound for the *unnormalized* Fourier Transform
(of determinant $n^{n/2}$), assuming only numbers with bounded constant
modulus are used. [Ailon 2013] shows an $\Omega(n\log n)$ bound for the
*normalized* Fourier Transform (of determinant 1), assuming only unitary
operations on two coordinates are allowed.
In this work we show that, as long as the computation is well
conditioned, *any scaling* of the Fourier transform requires
$\Omega(R^{-1}n\log n)$ operations, where R is the condition number.
This means that, on a given machine, a faster Fourier transform must be
less accurate.
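As a concrete reference point for the operation counts involved, here is a minimal numerical sketch (assuming numpy; standard DFT/FFT material, not code from the paper): the direct computation of the unnormalized transform takes roughly n^2 operations, while the FFT takes $O(n\log n)$, which the result above says cannot be beaten by a well-conditioned algorithm.

import numpy as np

def naive_dft(x):
    """Unnormalized DFT by explicit matrix multiplication: ~n^2 operations."""
    n = len(x)
    k = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(k, k) / n)  # DFT matrix; |det F| = n^{n/2}
    return F @ x

x = np.random.randn(1024)
# The FFT computes the same (unnormalized) transform in ~n log n operations.
assert np.allclose(naive_dft(x), np.fft.fft(x))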
Friday April 11
2:00PM
WWH 1314
Andris Ambainis (University of Latvia and IAS)
On the power of exact quantum algorithms
Abstract:
A quantum algorithm is exact if, on any input, it outputs the correct answer with certainty (probability 1), in contrast to the usual bounded-error model, in which a quantum algorithm is allowed to output an incorrect answer with small probability. Many quantum speedups are known in the bounded-error model, but coming up with exact quantum algorithms has been a much more difficult task.
We study this problem in the setting of quantum query complexity (a quantum generalization of the decision-tree model of computation). Until recently, the biggest advantage achieved by an exact quantum algorithm (for computing total Boolean functions) was just a factor of 2: a function that requires N queries classically can be computed with N/2 queries by an exact quantum algorithm.
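(The canonical factor-2 example, standard in this literature though not named above, is the parity function $\mathrm{PARITY}_N(x) = x_1 \oplus x_2 \oplus \cdots \oplus x_N$: one query, via Deutsch's algorithm, computes $x_{2i-1} \oplus x_{2i}$ exactly, so $\lceil N/2 \rceil$ exact quantum queries suffice, while any classical algorithm, even a randomized one, must make all $N$ queries.)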
We present the first example of a total Boolean function for which exact quantum algorithms have a superlinear advantage: O(N^{0.8675...}) queries for an exact quantum algorithm vs. N queries for classical algorithms.
Two polynomials $f, g \in \mathbb{F}[x_1, \ldots, x_n]$ are called {\em shift-equivalent} if there exists a vector
$(a_1, \ldots, a_n) \in \mathbb{F}^n$ such that the polynomial identity $f(x_1+a_1, \ldots, x_n+a_n) \equiv g(x_1,\ldots,x_n)$
holds. Our main result is a new randomized algorithm that tests whether two given polynomials are shift-equivalent.
Our algorithm runs in time polynomial in the circuit size of the polynomials, to which it is given black-box access. This
complements a previous work of Grigoriev (Theoretical Computer Science, 1997), who gave a deterministic algorithm running in time $n^{O(d)}$ for degree-$d$ polynomials.
Our algorithm uses randomness only to solve instances of the Polynomial Identity Testing (PIT) problem. Hence, if one
could de-randomize PIT (a long-standing open problem in complexity), a de-randomization of our algorithm would follow.
This establishes an equivalence between de-randomizing shift-equivalence testing and de-randomizing PIT (both in the black-box and the white-box settings). For certain restricted models, such as read-once branching programs, we
already obtain a deterministic algorithm using existing PIT results.
Joint work with Zeev Dvir and Amir Shpilka.
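The randomized subroutine mentioned above is black-box PIT, for which random evaluation suffices; here is a minimal Schwartz-Zippel-style sketch in Python (illustrative only, with an assumed degree bound; this is not the paper's shift-recovery procedure):

import random

def is_identically_zero(poly, n_vars, degree_bound, trials=20):
    # Schwartz-Zippel: a nonzero polynomial of degree <= degree_bound
    # vanishes at a random point of a size-(100 * degree_bound) grid with
    # probability at most 1/100, so each trial catches nonzeroness w.h.p.
    grid = range(100 * degree_bound)
    for _ in range(trials):
        point = [random.choice(grid) for _ in range(n_vars)]
        if poly(point) != 0:
            return False  # certificate that poly is nonzero
    return True  # identically zero with high probability

# Checking a candidate shift a: does f(x + a) - g(x) vanish identically?
f = lambda p: p[0] ** 2 + p[1]                   # f(x, y) = x^2 + y
g = lambda p: p[0] ** 2 + 2 * p[0] + p[1] + 1    # g(x, y) = f(x + 1, y)
a = (1, 0)
diff = lambda p: f([p[0] + a[0], p[1] + a[1]]) - g(p)
print(is_identically_zero(diff, n_vars=2, degree_bound=2))  # True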
Polynomial Factorization and Polynomial Identity Testing
Swastik Kopparty (Rutgers)
I will talk about a recent result, joint with Shubhangi Saraf and Amir Shpilka,
showing that deterministic polynomial identity testing
algorithms yield deterministic multivariate polynomial factorization algorithms.
The talk will include a review of the amazing randomized multivariate polynomial factorization
algorithms of Kaltofen and Kaltofen-Trager, and also other algebraic
gems such as bivariate polynomial factorization algorithms and the Hilbert Irreducibility Theorem.
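Schematically, the main result can be summarized as follows (a paraphrase, not the paper's formal statement):
$$\text{deterministic poly-time PIT} \ \Longrightarrow\ \text{deterministic poly-time multivariate polynomial factorization.}$$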
Survey on recent progress on lower bounds for arithmetic circuits
Shubhangi Saraf (Rutgers)
In the last few years there have been several exciting results related to depth reduction of arithmetic circuits and strong lower bounds for bounded depth arithmetic circuits. I will survey these results and highlight some of the main challenges and open directions in this area.
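For orientation, the depth-reduction statement behind this line of work (as sharpened by Koiran and Tavenas; paraphrased here) is that any n-variate, degree-d polynomial computed by an arithmetic circuit of size poly(n) is also computed by a homogeneous depth-4 circuit of size
$$n^{O(\sqrt{d})},$$
so an $n^{\omega(\sqrt{d})}$ lower bound against homogeneous depth-4 circuits, for an explicit polynomial, would separate VNP from VP.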
Lower bounds for homogeneous depth 4 arithmetic circuits
Mrinal Kumar (Rutgers)
In this talk, we will prove super-polynomial lower bounds for the class of homogeneous depth 4 arithmetic circuits.
This result builds upon and extends a long line of recent works which prove super-polynomial (and in some cases exponential) lower bounds for various restricted classes of homogeneous depth four circuits. The interest in homogeneous depth 4 circuits is motivated by the depth reduction results of Agrawal-Vinay, Koiran and Tavenas, which show that strong enough lower bounds for homogeneous depth four circuits would suffice to separate VNP from VP.
The main ingredient in our proof is a variant of the notion of shifted partial derivatives as the complexity measure. This measure allows us to first prove strong lower bounds for homogeneous depth 4 circuits in which all the monomials computed at the bottom layer have "bounded support". This lower bound, combined with a careful "random restriction" procedure (that transforms general homogeneous depth 4 circuits into depth 4 circuits with bounded support), gives us our final result.
Joint work with Shubhangi Saraf.
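For reference, the base measure (due to Kayal and to Gupta, Kamath, Kayal and Saptharishi; the talk uses a bounded-support variant of it) is the dimension of shifted partial derivatives,
$$\Gamma_{k,\ell}(f) \;=\; \dim\ \mathrm{span}\big\{\, x^{\alpha} \cdot \partial^{\beta} f \;:\; |\beta| = k,\ |\alpha| \le \ell \,\big\},$$
which is comparatively small for homogeneous depth 4 circuits with bounded bottom fan-in and large for suitable explicit polynomials.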
Thursday January 30
2:00PM
WWH 1314
Shubhangi Saraf (Rutgers)
Incidence geometry over the reals and locally correctable codes
Abstract: The classical Sylvester-Gallai theorem states the following: given a finite set of points in the Euclidean plane, if the line through every pair of points passes through a third point, then all the points must be collinear. At a high level, the result shows that many "local" linear dependencies imply a "global" bound on the dimension of the entire set of points. This result has been well studied in mathematics. In recent years, variants of the theorem have also found applications to several structural results in theoretical computer science.
In this talk I will describe some high-dimensional and quantitative extensions of the Sylvester-Gallai theorem, and some of their applications and connections in theoretical computer science to locally correctable codes. In particular, I will describe a recent result showing that 3-query linear locally correctable codes over the reals of dimension d require block length n > d^{2+\eps} for some \eps > 0. This improves the known quadratic lower bound, which holds for any linear code. Our proof uses a powerful result of Barthe from convex geometry to derive a structure theorem on the decoding query triples. This structure supports stronger random restriction arguments, and hence yields stronger lower bounds than previously available.
Based on joint work with Zeev Dvir and Avi Wigderson.
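For reference, the standard definition invoked here (paraphrased): a linear code $C : \mathbb{R}^d \to \mathbb{R}^n$ is $q$-query locally correctable with error parameter $\delta$ if there is a randomized decoder $\mathrm{Dec}$ that reads at most $q$ coordinates of a received word $w$ and satisfies
$$\Pr\big[\,\mathrm{Dec}^{\,w}(i) = C(m)_i\,\big] \;\ge\; 2/3 \qquad \text{whenever } \Delta(w, C(m)) \le \delta n,$$
where $\Delta$ denotes Hamming distance; the result above says that such linear codes with $q = 3$ over the reals must have $n > d^{2+\eps}$.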
If you would like to present something, please send an email to: jop (dot) briet (at) cims (dot) nyu (dot) edu
To subscribe to the mailing list, see: www.cs.nyu.edu/mailman/listinfo/cs_theory_seminar/