Theory Seminar
Usual coordinates: Thursday, 11AM, Room 1314, Warren Weaver Hall, 251 Mercer Street
Calendar
Upcoming talks
Past talks
Thursday January 24 Time : 10:45AM Room 1314, Warren Weaver Hall |
Mika Göös (IAS)
Title: Adventures in Monotone Complexity and TFNP
Abstract: *Separations:* We introduce a monotone variant of XOR-SAT and show it has exponential monotone circuit complexity. Since XOR-SAT is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity. *Characterizations:* We show that the communication (resp. query) analogue of PPA (subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer--Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995). Joint work with Pritish Kamath, Robert Robere, and Dmitry Sokolov. |
Thursday March 14 Time : 11AM Room 1314, Warren Weaver Hall |
Mert Saglam (University of Washington)
Title: Near log-convexity of heat and the k-Hamming distance problem
Abstract: We answer a 1982 conjecture of Erdős and Simonovits about the growth of the number of k-walks in a graph, which incidentally was studied even earlier by Blakley and Dixon in 1966. We prove this conjecture in a more general setup than the earlier treatment; furthermore, through a refinement and strengthening of this inequality, we resolve two related open questions in complexity theory: the communication complexity of the k-Hamming distance problem is Ω(k log k), and consequently any property tester for k-linearity requires Ω(k log k) queries. |
Thursday March 21 Time : 11AM Room 1314, Warren Weaver Hall |
Sahil Singla (Princeton)
Title: The Byzantine Secretary Problem
Abstract: In the classical secretary problem, a sequence of n elements arrives in a uniformly random order. The goal is to maximize the probability of selecting the largest element (or to maximize the expected value of the selected item). This model captures applications like online auctions, where we want to select the highest bidder. In many such applications, however, one may expect a few outlier arrivals that are adversarially placed in the arrival sequence. Can we still select a large element with good probability? Dynkin's popular 1/e-secretary algorithm is sensitive to even a single adversarial arrival: if the adversary gives one large bid at the beginning of the stream, the algorithm does not select any element at all.
In this work we introduce the Byzantine Secretary problem, where we have two kinds of elements: green (good) and red (bad). The green elements arrive uniformly at random. The reds arrive adversarially. The goal is to find a large green element. It is easy to see that selecting the largest green element is not possible even when a small fraction of the arrivals is red, i.e., we cannot do much better than random guessing. Hence we introduce the second-max benchmark, where the goal is to select the second-largest green element or better. This dramatically improves our results.
We study both the probability-maximization and the value-maximization settings. For probability-maximization, we show the existence of a good randomized algorithm using the minimax principle. Specifically, we give an algorithm for the known-distribution case, based on trying to guess the second-max in hindsight and using this estimate as a good guess for the future. For value-maximization, we give an explicit poly(log* n)-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of the second-max as we see more elements.
For the multiple secretary problem, where we can pick up to r secretaries, we show constant competitiveness as long as r is large enough. For this, we give an adaptive thresholding algorithm that raises and lowers thresholds depending on the quality and the quantity of recently selected elements. |
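The fragility of Dynkin's 1/e rule described in the abstract is easy to see in simulation. A minimal sketch (the `dynkin` helper and the toy streams are ours, not from the paper):

```python
import math
import random

def dynkin(values, sample_frac=1 / math.e):
    """Classical 1/e rule: observe the first ~n/e arrivals, then accept
    the first later arrival that beats everything seen so far."""
    n = len(values)
    cutoff = int(n * sample_frac)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if values[i] > best_seen:
            return i          # index of the selected element
    return None               # stream ended with nothing selected

random.seed(0)
greens = list(range(1, 1001))
random.shuffle(greens)

# On an all-green stream, the rule selects the maximum with probability ~1/e.
print(dynkin(greens))

# But a single adversarial red bid at the start poisons the sample phase:
# no later value ever exceeds it, so nothing is selected at all.
print(dynkin([10_000] + greens))  # None
```

This is exactly the failure mode that motivates the second-max benchmark.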
Thursday April 11 Time : 11AM Room 1314, Warren Weaver Hall |
Li-Yang Tan (Stanford University)
Title: A high-dimensional Littlewood--Offord inequality
Abstract: We prove a new Littlewood--Offord-type anticoncentration inequality for m-facet polytopes, a high-dimensional generalization of the classic Littlewood--Offord theorem. Our proof draws on and extends techniques from Kane's bound on the boolean average sensitivity of m-facet polytopes. Joint work with Ryan O'Donnell and Rocco Servedio; manuscript available at https://arxiv.org/abs/1808.04035 |
Thursday May 2 Time : 11AM Room 1314, Warren Weaver Hall |
Noah Stephens-Davidowitz (MIT)
Title: SETH-hardness of coding problems
Abstract: We show that, assuming a common conjecture in complexity theory, there are "no non-trivial algorithms" for the two most important problems in coding theory: the Nearest Codeword Problem (NCP) and the Minimum Distance Problem (MDP). Specifically, for any constant ε > 0, there is no N^{1-ε}-time algorithm for codes with N codewords. In fact, the NCP result even holds for a family of codes with a single code of each cardinality, and our hardness result therefore also applies to the preprocessing variant of the problem. These results are inspired by earlier work showing similar results for the analogous lattice problems (joint works with three other NYU alums: Huck Bennett and Sasha Golovnev and with Divesh Aggarwal), but the proofs for coding problems are far simpler. As in those works, we also prove weaker hardness results for approximate versions of these problems (showing that there is no N^{o(1)}-time algorithm in this case). Based on joint work with Vinod Vaikuntanathan. |
Thursday May 16 Time : 11AM Room 1314, Warren Weaver Hall |
Omri Weinstein (Columbia University)
Title: Data Structure Lower Bounds imply Rigidity
Abstract: I will talk about new connections between arithmetic data structures, circuit lower bounds and pseudorandomness. As the main result, we show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t = ω(log^2 n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+ε)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (t = n^δ) data structure lower bounds against near-maximal space would imply super-linear circuit lower bounds for log-depth linear circuits (which would close a four-decade-old open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the inner and outer dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average-case reduction for rigidity, which is of independent interest. Based mostly on joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard). |
Tuesday May 21 Time : 11AM Room 705, Warren Weaver Hall |
Jeroen Zuiddam (IAS)
Title: The asymptotic spectrum of graphs: duality for Shannon capacity
Abstract: We give a dual description of the Shannon capacity of graphs. The Shannon capacity of a graph is the rate of growth of the independence number under taking the strong power, or in different language, it is the maximum rate at which information can be transmitted over a noisy communication channel. Our dual description gives Shannon capacity as a minimization over the "asymptotic spectrum of graphs", which as a consequence unifies previous results and naturally gives rise to new questions. Besides a gentle introduction to the asymptotic spectrum of graphs we will discuss, if time permits, Strassen's general theory of "asymptotic spectra" and the asymptotic spectrum of tensors. |
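For intuition about the quantity being dualized: Shannon capacity measures the growth of the independence number under strong powers, and the classic pentagon example C5 (where α(C5 ⊠ C5) = 5 > α(C5)^2 = 4) can be checked by brute force. A small sketch; all helper names are ours, not from the talk:

```python
from itertools import combinations, product

def adj_c5(u, v):
    # Adjacency in the 5-cycle C5 on vertices 0..4.
    return (u - v) % 5 in (1, 4)

def adj_strong(p, q):
    # Strong product C5 x C5: distinct pairs whose coordinates are
    # coordinatewise equal or adjacent.
    return p != q and all(a == b or adj_c5(a, b) for a, b in zip(p, q))

def is_independent(s, adjacent):
    return all(not adjacent(u, v) for u, v in combinations(s, 2))

# alpha(C5) = 2, by exhausting all subsets of the 5 vertices.
alpha_c5 = max(k for k in range(1, 6)
               if any(is_independent(s, adj_c5)
                      for s in combinations(range(5), k)))

# In the strong square, {(i, 2i mod 5)} is an independent set of size 5,
# and no 6-vertex independent set exists.
verts = list(product(range(5), repeat=2))
assert is_independent([(i, 2 * i % 5) for i in range(5)], adj_strong)
no_size_6 = not any(is_independent(s, adj_strong)
                    for s in combinations(verts, 6))

print(alpha_c5, no_size_6)  # 2 True
```

So the strong square already certifies that the Shannon capacity of C5 is at least √5, strictly more than α(C5) = 2 per use of the channel would suggest.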
Fall 2018
Thursday September 27 Time : 11AM Room 505, Warren Weaver Hall |
Dor Minzer (Tel Aviv University)
Title: 2-to-2 Games is NP-hard
Abstract: The PCP theorem characterizes the computational class NP, so as to facilitate proofs that approximation problems are NP-hard. Over the last two and a half decades, the theory of PCPs has been vital in the proof of numerous NP-hardness results for approximation problems. Despite considerable efforts, however, several tight hardness-of-approximation results remain elusive via the PCP theorem and the accompanying body of techniques. In many such cases, the Unique-Games Conjecture, posed by Khot in 2002, can still be used to obtain tight hardness results. Since its proposal, the Unique-Games Conjecture has been a prominent open problem in theoretical computer science, inspiring many connections between Complexity Theory, Geometry, Analysis and Probability. This conjecture is stronger than the more standard conjecture that P ≠ NP, and its validity has been a matter of debate. One would therefore much rather prove the gold standard of computational hardness, that of NP-hardness. The main subject of the talk is the 2-to-2 Games Conjecture, a closely related conjecture also due to Khot, stating that 2-to-2 Games is NP-hard. A recent line of study pursued a new attack on the 2-to-2 Games Conjecture (albeit with imperfect completeness). The approach is based on a mathematical object called the Grassmann Graph, and relies on the study of edge-expansion in this graph; more specifically, of sets of vertices in the Grassmann Graph that contain even a tiny fraction of the edges incident to the set. A characterization of such sets was recently accomplished, completing the proof of the 2-to-2 Games Conjecture (with imperfect completeness). The talk discusses the 2-to-2 Games Conjecture, its implications and this line of study. Based on joint works with Irit Dinur, Subhash Khot, Guy Kindler and Muli Safra. |
Thursday December 6 Time : 10:45AM Room 1314, Warren Weaver Hall |
Cole Franks (Rutgers University)
Title: On the discrepancy of random matrices with many columns
Abstract: The discrepancy of a red-blue vertex coloring of a hypergraph is the maximum imbalance between the number of red and blue vertices in any edge. The discrepancy of a hypergraph is the least discrepancy of any red-blue coloring. A major open problem is the Beck-Fiala conjecture, which asserts that the discrepancy of every t-regular hypergraph is O(\sqrt{t}). I will discuss some recent joint work with Michael Saks on an application of Fourier analysis to the discrepancy of random regular hypergraphs. |
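To make the two definitions in the abstract concrete, here is an exponential brute-force sketch (for toy instances only; the names are illustrative):

```python
from itertools import combinations, product

def discrepancy(n, edges):
    """Minimum over red/blue colorings (encoded +1/-1) of the maximum
    imbalance |#red - #blue| over the edges. Exponential brute force."""
    return min(max(abs(sum(coloring[v] for v in e)) for e in edges)
               for coloring in product((1, -1), repeat=n))

# All 3-element edges on 4 vertices: every edge has odd size, so every
# coloring leaves imbalance >= 1, and a 2-red/2-blue split achieves 1.
edges = list(combinations(range(4), 3))
print(discrepancy(4, edges))  # 1
```

The Beck-Fiala conjecture asks whether this quantity stays O(\sqrt{t}) for every t-regular hypergraph, far below the trivial bound of t.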
Spring 2018
Thursday Feb 15 Time : 2PM Room 3-50, Kaufman Management Center |
Ola Svensson (EPFL)
Title: A Constant-factor Approximation Algorithm for the Asymmetric Traveling Salesman Problem
Abstract: We give a constant-factor approximation algorithm for the asymmetric traveling salesman problem. Our approximation guarantee is analyzed with respect to the standard LP relaxation, and thus our result confirms the conjectured constant integrality gap of that relaxation. Our techniques build upon the constant-factor approximation algorithm for the special case of node-weighted metrics. Specifically, we give a generic reduction to structured instances that resemble but are more general than those arising from node-weighted metrics. For those instances, we then solve Local-Connectivity ATSP, a problem known to be equivalent (in terms of constant-factor approximation) to the asymmetric traveling salesman problem. This is joint work with Jakub Tarnawski and László Végh. |
Friday, March 9 Time : 2PM Room 1314, Warren Weaver Hall |
Euiwoong Lee (NYU)
Title: FPT-Approximation Algorithms for k-Cut and k-Treewidth Deletion
Abstract: Approximation algorithms and fixed-parameter tractable (FPT) algorithms have been two major ways to deal with the NP-hardness of combinatorial optimization problems. The notion of FPT approximation can be naturally defined and has been actively studied recently. I will talk about my recent results on FPT approximation algorithms.
- k-Cut is the problem where we want to remove the minimum number of edges so that the graph has at least k connected components. We give a (2 - epsilon)-FPT approximation algorithm for some epsilon > 0, improving upon a (non-FPT) 2-approximation.
- Treewidth k-deletion is the problem where we want to remove the minimum number of vertices so that the graph has treewidth at most k. We give an O(log k)-FPT approximation algorithm, which implies the same approximation factor for many other problems. This improves the previous best algorithm, which gave an f(k)-approximation for some (implicit) function f.
Based on joint works with Anupam Gupta, Jason Li, Pasin Manurangsi, and Michal Wlodarczyk. |
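For intuition about the k-cut objective itself (not the FPT algorithm from the talk), a brute-force sketch on a toy graph; helper names are ours:

```python
from itertools import combinations

def num_components(n, edges):
    # Count connected components of a graph on vertices 0..n-1 via DFS.
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, count = set(), 0
    for start in range(n):
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u])
    return count

def min_k_cut(n, edges, k):
    # Fewest edge deletions leaving at least k components (brute force).
    for r in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), r):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if num_components(n, kept) >= k:
                return r

# Two triangles joined by a bridge: a 3-cut must delete the bridge and
# two edges of one triangle, so the optimum is 3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(min_k_cut(6, edges, 3))  # 3
```

The exhaustive search is exponential in the number of edges; the point of the talk is beating the factor-2 guarantee of efficient approximations when k is the parameter.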
Friday, March 16 Time : 2PM Room 1314, Warren Weaver Hall |
Shay Moran (Princeton University)
Title: On the expressiveness of comparison queries
Abstract: Comparisons are a classical and well-studied algorithmic tool that is used in a variety of contexts and applications. I will present three examples that manifest the expressiveness of these queries in information theory, machine learning, and complexity theory.
20 (simple) questions. A basic combinatorial interpretation of Shannon's entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for π: Bob's questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. We investigate the following question: Are there restricted sets of questions that match the performance of Huffman codes? We show that for every distribution π, Bob has a strategy that uses only questions of the form "x<c?" and "x=c?", and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense.
Active classification with comparison queries. This part concerns an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query); or which one of two restaurants she liked more (a comparison query). We focus on the class of half-spaces, and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries.
Nearly optimal linear decision trees for k-SUM and related problems. We use the above active classification framework to construct nearly optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,+1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to poly-logarithmic terms. Based on joint works with Yuval Dagan, Yuval Filmus, Ariel Gabizon, Daniel Kane, Shachar Lovett, and Jiapeng Zhang. |
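The H(π)+1 benchmark above can be checked numerically: the expected number of Yes/No questions under a Huffman strategy equals the expected codeword length, which the standard merging procedure computes directly. A small sketch (the example distribution is arbitrary, not from the talk):

```python
import heapq
import math

def huffman_average_length(pi):
    # Expected number of Yes/No questions under an optimal (Huffman)
    # strategy: each merge adds the combined mass once, and the grand
    # total equals sum over outcomes of p * depth in the Huffman tree.
    heap = list(pi)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

def entropy(pi):
    return -sum(p * math.log2(p) for p in pi if p > 0)

pi = [0.4, 0.3, 0.2, 0.1]
avg = huffman_average_length(pi)
assert avg < entropy(pi) + 1         # the H(pi)+1 guarantee
print(round(avg, 3), round(entropy(pi), 3))  # 1.9 1.846
```

The result in the abstract says the same H(π)+1 bound survives even when Bob is restricted to "x<c?" and "x=c?" questions.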
Tuesday March 27 Time : 11AM Room 1314, Warren Weaver Hall |
Vijay Vazirani (UC Irvine)
Title: Planar Graph Perfect Matching is in NC
Abstract: Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of Random NC matching algorithms. Within this question, the case of planar graphs has remained an enigma: On the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution! The case of bipartite planar graphs was solved by Miller and Naor in 1989 via a flow-based algorithm. In 2000, Mahajan and Varadarajan gave an elegant way of using counting matchings to find one, hence giving a different NC algorithm. However, non-bipartite planar graphs still didn't yield: the stumbling block being odd tight cuts. Interestingly enough, these are also a key to the solution: a balanced odd tight cut leads to a straightforward divide-and-conquer NC algorithm. The remaining task is to find such a cut in NC. This requires several algorithmic ideas, such as finding a point in the interior of the minimum weight face of the perfect matching polytope and uncrossing odd tight cuts. Paper available at: https://arxiv.org/pdf/1709.07822.pdf Joint work with Nima Anari. |
Friday, May 4 Time : 2PM Room 1314, Warren Weaver Hall |
Sam Hopkins (Cornell University)
Title: Spectral Algorithms and SoS Proofs for Planted Problems
Abstract: High-dimensional statistical inference is a central concern of modern computer science, but theoretical understanding of which inference problems are computationally tractable and which are not lags far behind both classical worst-case algorithms/computational complexity and machine-learning practice. This talk will discuss recent efforts towards a more unified theory of algorithmic statistical inference based on meta-algorithms and the Sum of Squares (SoS) method. In particular, we will discuss the quest for problem- and algorithm-independent criteria for algorithmic intractability in the statistical inference setting. The technical portion of the talk will describe (and prove, time permitting) a recent theorem along these lines characterizing SoS algorithms for some particularly special inference problems (often called planted problems) in terms of simpler spectral algorithms. Based primarily on joint work with Pravesh Kothari, Aaron Potechin, Prasad Raghavendra, Tselil Schramm, and David Steurer, from FOCS 2017. |
Friday, May 11 Time : 2PM Room 1314, Warren Weaver Hall |
Pravesh Kothari (Princeton)
Title: The Sum-of-Squares Approach to Unsupervised Machine Learning
Abstract: Influential works over the past decade have led to an essentially complete understanding of basic problems in unsupervised machine learning such as compressed sensing and matrix completion. The efficient algorithms that achieve the strongest possible guarantees for such problems are based on convex relaxations such as linear and semi-definite programming. Such a detailed understanding, however, has not been possible for other fundamental problems such as clustering. This is in part because coming up with and analyzing the guarantees of the relevant convex relaxation have relied on sophisticated, creative "leaps" that are problem-specific (such as guessing and analyzing a clever "dual" solution). How to apply such a blueprint to other settings of interest is thus not immediately clear. The goal of this talk is to illustrate the "sum-of-squares method": a recently developed approach that gives a natural and intuitive "blueprint" for both coming up with and analyzing the "right" convex relaxation for a number of problems in unsupervised machine learning, including the ones mentioned above. Key to this paradigm is a surprising reduction of efficient algorithm design to finding "simple" proofs of "identifiability": a proof that uses the given data to verify that a purported solution is indeed the correct one. In recent years, this single algorithmic paradigm has yielded sharp guarantees for a number of basic problems in unsupervised learning. In this talk, I will explain this approach by means of two examples for which we were able to make substantial progress using the sum-of-squares method: 1) clustering mixtures of spherical Gaussians and 2) estimating moments of distributions in the presence of adversarial outliers. Based on joint works with Jacob Steinhardt (Stanford) and David Steurer (ETH Zurich). |