Theory Seminar
Usual coordinates:
Thursday, 11AM
Room 1314
Warren Weaver Hall
251 Mercer Street
Calendar
Upcoming talks
Thursday May 16 Time : 11AM Room 1314, Warren Weaver Hall 
Omri Weinstein (Columbia University)
Title: Data Structure Lower Bounds imply Rigidity
Abstract: I will talk about new connections between arithmetic data structures, circuit lower bounds and pseudorandomness. As the main result, we show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t ≥ ω(log^2 n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+ε)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (t ≥ n^δ) data structure lower bounds against near-maximal space would imply super-linear circuit lower bounds for log-depth linear circuits (which would close a four-decade-old open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the inner and outer dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average-case reduction for rigidity, which is of independent interest. Based mostly on joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard). 
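For readers unfamiliar with the central object of the abstract: the rigidity R_M(r) of a matrix M is the minimum number of entries that must be changed to bring its rank down to at most r. Purely as an illustration of the definition (this has nothing to do with the talk's techniques), a brute-force computation over GF(2) for tiny matrices:

```python
from itertools import combinations

def rank_gf2(rows, ncols):
    """Rank of a 0/1 matrix over GF(2); rows are given as bitmasks."""
    rows = list(rows)
    rank = 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]  # eliminate the pivot column elsewhere
        rank += 1
    return rank

def rigidity(matrix, n, r):
    """R_M(r): fewest entry flips (over GF(2)) needed to force rank(M) <= r."""
    cells = [(i, j) for i in range(n) for j in range(n)]
    for k in range(n * n + 1):
        for flips in combinations(cells, k):
            rows = list(matrix)
            for i, j in flips:
                rows[i] ^= 1 << j
            if rank_gf2(rows, n) <= r:
                return k

# The 4x4 identity has rank 4; two flips suffice (and are needed, since one
# flip changes the rank by at most 1) to reach rank 2.
I4 = [0b0001, 0b0010, 0b0100, 0b1000]
print(rigidity(I4, 4, 2))  # 2
```

The rigidity lower bounds sought in the talk concern explicit matrix families and much finer rank/change trade-offs; this sketch only fixes the definition.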
Past talks
Thursday January 24 Time : 10:45AM Room 1314, Warren Weaver Hall 
Mika Göös (IAS)
Title: Adventures in Monotone Complexity and TFNP
Abstract: *Separations:* We introduce a monotone variant of XOR-SAT and show it has exponential monotone circuit complexity. Since XOR-SAT is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity. *Characterizations:* We show that the communication (resp. query) analogue of PPA (a subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer–Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995). Joint work with Pritish Kamath, Robert Robere, and Dmitry Sokolov. 
Thursday March 14 Time : 11AM Room 1314, Warren Weaver Hall 
Mert Saglam (University of Washington)
Title: Near log-convexity of heat and the k-Hamming distance problem
Abstract: We answer a 1982 conjecture of Erdős and Simonovits about the growth of the number of k-walks in a graph, which incidentally was studied even earlier by Blakley and Dixon in 1966. We prove this conjecture in a more general setup than the earlier treatment; furthermore, through a refinement and strengthening of this inequality, we resolve two related open questions in complexity theory: the communication complexity of the k-Hamming distance is Ω(k log k), and consequently any property tester for k-linearity requires Ω(k log k) queries. 
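For reference, a k-walk is a walk traversing k edges, and the total number of such walks is easy to compute by dynamic programming (equivalently, it is the sum of the entries of the k-th power of the adjacency matrix). A minimal sketch of that count, unrelated to the proof techniques of the talk:

```python
def count_walks(adj, k):
    """Total number of walks traversing k edges in an undirected graph,
    given as an adjacency dict {vertex: [neighbors]}."""
    walks = {v: 1 for v in adj}  # one length-0 walk ends at each vertex
    for _ in range(k):
        # a length-i walk ending at v extends any length-(i-1) walk
        # ending at a neighbor of v
        walks = {v: sum(walks[u] for u in adj[v]) for v in adj}
    return sum(walks.values())

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(count_walks(triangle, 1))  # 6 (each of 3 edges, in both directions)
print(count_walks(triangle, 2))  # 12
```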
Thursday March 21 Time : 11AM Room 1314, Warren Weaver Hall 
Sahil Singla (Princeton)
Title: The Byzantine Secretary Problem
Abstract: In the classical secretary problem, a sequence of n elements arrive in a uniformly random order. The goal is to maximize the probability of selecting the largest element (or to maximize the expected value of the selected item). This model captures applications like online auctions, where we want to select the highest bidder. In many such applications, however, one may expect a few outlier arrivals that are adversarially placed in the arrival sequence. Can we still select a large element with good probability? Dynkin's popular 1/e-secretary algorithm is sensitive to even a single adversarial arrival: if the adversary gives one large bid at the beginning of the stream, the algorithm does not select any element at all. In this work we introduce the Byzantine Secretary problem, where we have two kinds of elements: green (good) and red (bad). The green elements arrive uniformly at random. The reds arrive adversarially. The goal is to find a large green element. It is easy to see that selecting the largest green element is not possible even when a small fraction of the arrivals is red, i.e., we cannot do much better than random guessing. Hence we introduce the second-max benchmark, where the goal is to select the second-largest green element or better. This dramatically improves our results. We study both the probability-maximization and the value-maximization settings. For probability-maximization, we show the existence of a good randomized algorithm using the minimax principle. Specifically, we give an algorithm for the known-distribution case, based on trying to guess the second-max in hindsight and using this estimate as a good guess for the future. For value-maximization, we give an explicit poly(log^* n)-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of the second-max as we see more elements. 
For the multiple secretary problem, where we can pick up to r secretaries, we show constant competitiveness as long as r is large enough. For this, we give an adaptive thresholding algorithm that raises and lowers thresholds depending on the quality and the quantity of recently selected elements.
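The fragility of Dynkin's 1/e-strategy noted in the abstract is easy to see in simulation. A rough sketch (the n/e observation cutoff is the standard textbook choice; the sample sizes and value ranges below are arbitrary, not from the paper):

```python
import math
import random

def dynkin(stream):
    """Dynkin's strategy: observe the first n/e elements, then select the
    first later element beating everything seen so far (else select nothing)."""
    cutoff = int(len(stream) / math.e)
    best_seen = max(stream[:cutoff], default=float('-inf'))
    for x in stream[cutoff:]:
        if x > best_seen:
            return x
    return None  # no element selected

def success_rate(n=50, trials=5000, adversarial_first=False):
    """Fraction of trials in which the overall maximum is selected."""
    wins = 0
    for _ in range(trials):
        vals = random.sample(range(1, 10 * n), n)  # distinct random values
        if adversarial_first:
            vals = [max(vals) + 1] + vals  # one huge adversarial bid up front
        wins += dynkin(vals) == max(vals)
    return wins / trials

# Without adversarial arrivals the success rate is close to 1/e ~ 0.37;
# with a single large bid placed first, nothing ever beats the observation
# phase and the algorithm selects no element at all.
print(success_rate())
print(success_rate(adversarial_first=True))
```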

Thursday April 11 Time : 11AM Room 1314, Warren Weaver Hall 
Li-Yang Tan (Stanford University)
Title: A high-dimensional Littlewood–Offord inequality
Abstract: We prove a new Littlewood–Offord-type anti-concentration inequality for m-facet polytopes, a high-dimensional generalization of the classic Littlewood–Offord theorem. Our proof draws on and extends techniques from Kane's bound on the Boolean average sensitivity of m-facet polytopes. Joint work with Ryan O'Donnell and Rocco Servedio; manuscript available at https://arxiv.org/abs/1808.04035 
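For context, the classic one-dimensional Littlewood–Offord theorem says that for nonzero a_1, …, a_n, the signed sum Σ ε_i a_i over uniform signs ε_i ∈ {−1, +1} puts at most O(1/√n) probability mass on any single point, with the all-ones vector (a central binomial coefficient) as the extremal case. A small exhaustive check, purely for intuition:

```python
import itertools
import math
from collections import Counter

def max_atom(a):
    """Largest point mass of the signed sum S = sum(eps_i * a_i),
    over all 2^n sign patterns eps in {-1,+1}^n."""
    counts = Counter(sum(e * x for e, x in zip(eps, a))
                     for eps in itertools.product((-1, 1), repeat=len(a)))
    return max(counts.values()) / 2 ** len(a)

n = 12
print(max_atom([1] * n))   # C(12,6)/2^12 ~ 0.2256, the extremal case
print(1 / math.sqrt(n))    # the Littlewood-Offord scale, ~ 0.2887
print(max_atom([1, 2, 4])) # all 8 sums distinct, so each atom is 1/8
```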
Thursday May 2 Time : 11AM Room 1314, Warren Weaver Hall 
Noah Stephens-Davidowitz (MIT)
Title: SETH-hardness of coding problems
Abstract: We show that, assuming a common conjecture in complexity theory, there are "no nontrivial algorithms" for the two most important problems in coding theory: the Nearest Codeword Problem (NCP) and the Minimum Distance Problem (MDP). Specifically, for any constant ε > 0, there is no N^{1−ε}-time algorithm for codes with N codewords. In fact, the NCP result even holds for a family of codes with a single code of each cardinality, and our hardness result therefore also applies to the preprocessing variant of the problem. These results are inspired by earlier work showing similar results for the analogous lattice problems (joint works with three other NYU alums, Huck Bennett and Sasha Golovnev, and with Divesh Aggarwal), but the proofs for coding problems are far simpler. As in those works, we also prove weaker hardness results for approximate versions of these problems (showing that there is no N^{o(1)}-time algorithm in this case). Based on joint work with Vinod Vaikuntanathan. 
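To fix terminology: in the Nearest Codeword Problem we are given a code (say, by a generator matrix over GF(2)) and a target word, and must find the codeword minimizing the Hamming distance to the target. The trivial algorithm enumerates all N = 2^k codewords, essentially the running time the talk shows cannot be beaten; a minimal sketch (the bitmask representation and function names are my own, not from the talk):

```python
from itertools import product

def encode(msg, gen_rows):
    """Multiply a 0/1 message vector by a GF(2) generator matrix
    (rows given as bitmasks)."""
    word = 0
    for bit, row in zip(msg, gen_rows):
        if bit:
            word ^= row
    return word

def nearest_codeword(gen_rows, target):
    """Brute-force NCP over GF(2): enumerate all N = 2^k codewords and
    return the closest one to the target, with its Hamming distance."""
    k = len(gen_rows)
    best = min((encode(m, gen_rows) for m in product((0, 1), repeat=k)),
               key=lambda w: bin(w ^ target).count('1'))
    return best, bin(best ^ target).count('1')

# Length-3 repetition code {000, 111}: the nearest codeword to 101 is 111.
print(nearest_codeword([0b111], 0b101))  # (7, 1)
```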
Fall 2018
Thursday September 27 Time : 11AM Room 505, Warren Weaver Hall 
Dor Minzer (Tel Aviv University)
Title: 2-to-2 Games is NP-hard
Abstract: The PCP theorem characterizes the computational class NP, so as to facilitate proofs that approximation problems are NP-hard. Over the last two and a half decades, the theory of PCPs has been vital in the proof of numerous NP-hardness results for approximation problems. Despite considerable efforts, however, several tight hardness-of-approximation results remain elusive via the PCP theorem and the accompanying body of techniques. In many such cases, the Unique Games Conjecture, posed by Khot in 2002, can still be used to obtain tight hardness results. Since its proposal, the Unique Games Conjecture has been a prominent open problem in theoretical computer science, inspiring many connections between complexity theory, geometry, analysis and probability. This conjecture is stronger than the more standard conjecture that P ≠ NP, and its validity has been a matter of debate. One would therefore much rather prove the gold standard of computational hardness, that of NP-hardness. The main subject of the talk is the 2-to-2 Games Conjecture, a closely related conjecture also due to Khot, stating that 2-to-2 Games is NP-hard. A recent line of study pursued a new attack on the 2-to-2 Games Conjecture (albeit with imperfect completeness). The approach is based on a mathematical object called the Grassmann graph, and relies on the study of edge expansion in this graph; more specifically, the study of sets of vertices in the Grassmann graph that contain even a tiny fraction of the edges incident to the set. A characterization of such sets was recently accomplished, completing the proof of the 2-to-2 Games Conjecture (with imperfect completeness). The talk discusses the 2-to-2 Games Conjecture, its implications and this line of study. Based on joint works with Irit Dinur, Subhash Khot, Guy Kindler and Muli Safra. 
Thursday December 6 Time : 10:45AM Room 1314, Warren Weaver Hall 
Cole Franks (Rutgers University)
Title: On the discrepancy of random matrices with many columns
Abstract: The discrepancy of a red-blue vertex coloring of a hypergraph is the maximum imbalance between the number of red and blue vertices in any edge. The discrepancy of a hypergraph is the least discrepancy of any red-blue coloring. A major open problem is the Beck–Fiala conjecture, which asserts that the discrepancy of every t-regular hypergraph is O(√t). I will discuss some recent joint work with Michael Saks on an application of Fourier analysis to the discrepancy of random regular hypergraphs. 
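The definitions in the abstract are small enough to check exhaustively on toy hypergraphs. A brute-force sketch, with signs ±1 standing for the two colors (the examples are my own, not from the talk):

```python
from itertools import product

def discrepancy(edges, n):
    """disc(H) = min over +/-1 colorings of max over edges of |sum of colors|."""
    return min(max(abs(sum(coloring[v] for v in e)) for e in edges)
               for coloring in product((-1, 1), repeat=n))

# An odd cycle (2-regular) cannot be properly 2-colored, so some edge is
# always monochromatic and disc = 2.
print(discrepancy([(0, 1), (1, 2), (2, 0)], 3))  # 2

# Fano plane: 7 points, 7 lines of size 3. Edges have odd size, so disc >= 1;
# in fact it is 3, since the Fano plane famously admits no 2-coloring that
# avoids a monochromatic line.
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]
print(discrepancy(fano, 7))  # 3
```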
Spring 2018
Thursday Feb 15 Time : 2PM Room 350, Kaufman Management Center 
Ola Svensson (EPFL)
Title: A Constant-factor Approximation Algorithm for the Asymmetric Traveling Salesman Problem
Abstract: We give a constant-factor approximation algorithm for the asymmetric traveling salesman problem. Our approximation guarantee is analyzed with respect to the standard LP relaxation, and thus our result confirms the conjectured constant integrality gap of that relaxation. Our techniques build upon the constant-factor approximation algorithm for the special case of node-weighted metrics. Specifically, we give a generic reduction to structured instances that resemble but are more general than those arising from node-weighted metrics. For those instances, we then solve Local-Connectivity ATSP, a problem known to be equivalent (in terms of constant-factor approximation) to the asymmetric traveling salesman problem. This is joint work with Jakub Tarnawski and László Végh. 
Friday, March 9 Time : 2PM Room 1314, Warren Weaver Hall 
Euiwoong Lee (NYU)
Title: FPT-Approximation Algorithms for k-Cut and k-Treewidth Deletion
Abstract: Approximation algorithms and fixed-parameter tractable (FPT) algorithms have been two major ways to deal with the NP-hardness of combinatorial optimization problems. The notion of FPT approximation can be naturally defined, and has been actively studied recently. I will talk about my recent results on FPT approximation algorithms. (1) k-Cut is the problem where we want to remove the minimum number of edges such that the graph has at least k connected components. We give a (2 − ε)-FPT approximation algorithm for some ε > 0, improving upon a (non-FPT) 2-approximation. (2) Treewidth-k deletion is the problem where we want to remove the minimum number of vertices such that the graph has treewidth at most k. We give an O(log k)-FPT approximation algorithm, which implies the same approximation factor for many other problems. This improves the previous best algorithm, which gave an f(k)-approximation for some (implicit) function f. Based on joint works with Anupam Gupta, Jason Li, Pasin Manurangsi, and Michal Wlodarczyk. 
Friday, March 16 Time : 2PM Room 1314, Warren Weaver Hall 
Shay Moran (Princeton University)
Title: On the expressiveness of comparison queries
Abstract: Comparisons are a classical and well-studied algorithmic tool that is used in a variety of contexts and applications. I will present three examples that manifest the expressiveness of these queries in information theory, machine learning, and complexity theory. 20 (simple) questions. A basic combinatorial interpretation of Shannon's entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n} and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for π: Bob's questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. We investigate the following question: Are there restricted sets of questions that match the performance of Huffman codes? We show that for every distribution π, Bob has a strategy that uses only questions of the form "x < c?" and "x = c?", and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. Active classification with comparison queries. This part concerns an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query), or which one of two restaurants she liked more (a comparison query). We focus on the class of halfspaces, and show that under natural assumptions, such as large margin or bounded bit description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries. 
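The H(π)+1 benchmark in the "20 questions" part is the classical Huffman-code guarantee; the contribution is matching it with only "x < c?" and "x = c?" questions. The benchmark itself is easy to check numerically (a generic sketch of entropy and Huffman cost, not code from the paper):

```python
import heapq
import math

def entropy(p):
    """Shannon entropy H(p) in bits."""
    return -sum(q * math.log2(q) for q in p if q)

def huffman_avg_length(p):
    """Expected codeword length of a Huffman code for distribution p."""
    heap = [(q, i) for i, q in enumerate(p)]
    heapq.heapify(heap)
    next_id = len(p)  # tie-breaker so tuples always compare cleanly
    total = 0.0
    while len(heap) > 1:
        a, _ = heapq.heappop(heap)
        b, _ = heapq.heappop(heap)
        total += a + b  # each merge adds one bit to every symbol it covers
        heapq.heappush(heap, (a + b, next_id))
        next_id += 1
    return total

p = [0.4, 0.3, 0.2, 0.1]
print(entropy(p))             # ~ 1.846
print(huffman_avg_length(p))  # 1.9, below H(p) + 1 as guaranteed
```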
Nearly optimal linear decision trees for k-SUM and related problems. We use the above active classification framework to construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,+1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to polylogarithmic terms. Based on joint works with Yuval Dagan, Yuval Filmus, Ariel Gabizon, Daniel Kane, Shachar Lovett, and Jiapeng Zhang. 
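To make the problem concrete: k-SUM asks whether some k of the n input numbers sum to zero, and the result above counts only the linear queries made, not other computation. A naive baseline, in which each test reads the sign of a k-sparse {−1, 0, +1} linear form but uses roughly n^k queries instead of O(n log n) (my own illustration, not the paper's decision tree):

```python
from itertools import combinations

def k_sum(xs, k):
    """Naive k-SUM: scan all k-subsets for one summing to zero, returning
    the first witness found (in combinations order) or None."""
    for c in combinations(xs, k):
        if sum(c) == 0:
            return c
    return None

print(k_sum([3, -1, 7, -9, 2, 4], 3))  # (7, -9, 2)
print(k_sum([1, 2, 3], 2))             # None
```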
Tuesday March 27 Time : 11AM Room 1314, Warren Weaver Hall 
Vijay Vazirani (UC Irvine)
Title: Planar Graph Perfect Matching is in NC
Abstract: Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of Random NC matching algorithms. Within this question, the case of planar graphs has remained an enigma: on the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution! The case of bipartite planar graphs was solved by Miller and Naor in 1989 via a flow-based algorithm. In 2000, Mahajan and Varadarajan gave an elegant way of using counting matchings to find one, hence giving a different NC algorithm. However, non-bipartite planar graphs still didn't yield: the stumbling block being odd tight cuts. Interestingly enough, these are also a key to the solution: a balanced odd tight cut leads to a straightforward divide-and-conquer NC algorithm. The remaining task is to find such a cut in NC. This requires several algorithmic ideas, such as finding a point in the interior of the minimum-weight face of the perfect matching polytope and uncrossing odd tight cuts. Paper available at: https://arxiv.org/pdf/1709.07822.pdf Joint work with Nima Anari. 
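To fix the objects in the counting-vs-finding contrast above: a perfect matching pairs up all vertices along edges. A brute-force enumerator for tiny graphs (exponential time; the point of the talk is doing the finding step in NC, which this sketch does not attempt):

```python
def perfect_matchings(n, edges):
    """All perfect matchings of a graph on vertices 0..n-1 (n even),
    by brute force: repeatedly pair the smallest unmatched vertex."""
    E = {frozenset(e) for e in edges}

    def rec(verts):
        if not verts:
            yield []
            return
        v = min(verts)
        for u in verts - {v}:
            if frozenset((u, v)) in E:
                for rest in rec(verts - {u, v}):
                    yield [(v, u)] + rest

    return list(rec(frozenset(range(n))))

# The 4-cycle (a planar graph) has exactly two perfect matchings:
# its two pairs of opposite edges.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
ms = perfect_matchings(4, c4)
print(len(ms))  # 2
```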
Friday, May 4 Time : 2PM Room 1314, Warren Weaver Hall 
Sam Hopkins (Cornell University)
Title: Spectral Algorithms and SoS Proofs for Planted Problems
Abstract: High-dimensional statistical inference is a central concern of modern computer science, but theoretical understanding of which inference problems are computationally tractable and which are not lags far behind both classical worst-case algorithms/computational complexity and machine-learning practice. This talk will discuss recent efforts towards a more unified theory of algorithmic statistical inference based on meta-algorithms and the Sum of Squares (SoS) method. In particular, we will discuss the quest for problem- and algorithm-independent criteria for algorithmic intractability in the statistical inference setting. The technical portion of the talk will describe (and prove, time permitting) a recent theorem along these lines characterizing SoS algorithms for some particularly special inference problems (often called planted problems) in terms of simpler spectral algorithms. Based primarily on joint work with Pravesh Kothari, Aaron Potechin, Prasad Raghavendra, Tselil Schramm, and David Steurer, from FOCS 2017. 
Friday, May 11 Time : 2PM Room 1314, Warren Weaver Hall 
Pravesh Kothari (Princeton)
Title: The Sum-of-Squares Approach to Unsupervised Machine Learning
Abstract: Influential works over the past decade have led to an essentially complete understanding of basic problems in unsupervised machine learning such as compressed sensing and matrix completion. The efficient algorithms that achieve the strongest possible guarantees for such problems are based on convex relaxations such as linear and semidefinite programming. Such a detailed understanding, however, has not been possible for other fundamental problems such as clustering. This is in part because coming up with and analyzing the guarantees of the relevant convex relaxation has relied on sophisticated, creative "leaps" that are problem-specific (such as guessing and analyzing a clever "dual" solution). How to apply such a blueprint to other settings of interest is thus not immediately clear. The goal of this talk is to illustrate the "sum-of-squares method": a recently developed approach that gives a natural and intuitive blueprint for both coming up with and analyzing the "right" convex relaxation for a number of problems in unsupervised machine learning, including the ones mentioned above. Key to this paradigm is a surprising reduction of efficient algorithm design to finding "simple" proofs of "identifiability": a proof that uses the given data to *verify* that a purported solution is indeed the correct one. In recent years, this single algorithmic paradigm has yielded sharp guarantees for a number of basic problems in unsupervised learning. In this talk, I will explain this approach by means of two examples for which we were able to make substantial progress using the sum-of-squares method: 1) clustering mixtures of spherical Gaussians and 2) estimating moments of distributions in the presence of adversarial outliers. Based on joint works with Jacob Steinhardt (Stanford) and David Steurer (ETH Zurich). 