Colloquium Details

New York Area Theory Day

Speaker: IBM/NYU/Columbia

Location: Schapiro (CEPSR) Building, Room 412, Columbia University

Date: May 13, 2011, 9:30 a.m.

Host: Columbia University, New York

Synopsis:

The New York Area Theory Day is a semi-annual conference that aims to bring together people in the New York metropolitan area for one day of interaction and discussion. The Theory Day features several (usually 4-5) hour-long presentations by leading theoretical computer scientists about state-of-the-art advances in various areas. Some presentations survey the latest advances in an area, while others concentrate on a particular result. The meeting is free and open to everyone; in particular, students are encouraged to attend.

For directions, please go to: http://www.cs.columbia.edu/theory/directions.html

Program

9:30 - 10:00 Coffee and bagels

10:00 - 10:55 Prof. Sanjeev Arora (Princeton University) Probabilistically Checkable Proofs: the First Two Decades

10:55 - 11:05 Short break

11:05 - 12:00 Prof. Silvio Micali (MIT) The Second-Knowledge Mechanism

12:00 - 2:00 Lunch break

2:00 - 2:55 Dr. Lisa Zhang (Bell Labs) Energy Aware Network Design

2:55 - 3:15 Coffee break

3:15 - 4:10 Prof. Mario Szegedy (Rutgers University) Tardos and Moser meet Lovasz

If you would like to be added to the mailing list, please send email to talr@us.ibm.com.

Organizers:

Yevgeniy Dodis, dodis@cs.nyu.edu
Tal Malkin, tal@cs.columbia.edu
Tal Rabin, talr@us.ibm.com
Baruch Schieber, sbar@us.ibm.com

==================================================================

Abstracts

Probabilistically Checkable Proofs: the First Two Decades

Prof. Sanjeev Arora (Princeton University)

The area of probabilistically checkable proofs (PCPs) is two decades old this year. At its heart is a counterintuitive phenomenon: mathematical proofs can be checked by looking at as few as three bits in them! This realization transformed our understanding of computational complexity theory, especially the approximability of NP-hard problems. Along the way it has also thrown up many surprises, improving our understanding of several existing areas such as cryptography, approximation algorithms, average case complexity, subexponential algorithms, constraint satisfaction problems, embeddings of metric spaces, convex programming relaxations of combinatorial problems, learning theory, etc. It has spawned new areas of study such as property testing, list decoding, and the unique games conjecture. This survey talk will be a broad sweep through these connections, trying to explain why this area holds so much interest in theoretical computer science, with pauses at some of my favorite open problems and promising directions. It should be accessible to a broad audience.

==================================================================

The Second-Knowledge Mechanism

Prof. Silvio Micali (MIT)

In mechanism design, the classical way to model the players' uncertainty about their opponents in a setting of incomplete information is to assume a "Bayesian" prior. We instead model it in a SET-THEORETIC, and yet LEVERAGEABLE, way.

In auctions of a single good, we put forward a new revenue benchmark B, lying between the highest and second-highest valuation, and (1) provide a new mechanism that perfectly and robustly achieves B under mutual knowledge of rationality, and (2) prove that no mechanism can even approximate B in a robust way, not only in dominant/undominated strategies, but also "at equilibrium."

Joint work with Jing Chen.

==================================================================

Energy Aware Network Design

Dr. Lisa Zhang (Bell Labs)

Given a network, a set of demands, and a cost function f, the min-cost network design problem is to route all demands with the objective of minimizing the total cost sum_e f(l_e) over all links e, where l_e is the traffic load on link e under the routing. We focus on cost functions of the form f(x) = sigma + x^alpha for x > 0, with f(0) = 0. For alpha <= 1, the problem corresponds to the well-studied Buy-at-Bulk network design.
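To make the cost model concrete, here is a small Python sketch (purely illustrative; the routings and parameter values below are made up for this note, not taken from the talk) that evaluates the total cost sum_e f(l_e) and contrasts the two regimes of alpha:

    def link_cost(load, sigma, alpha):
        # Cost of a single link carrying `load` units of traffic:
        # f(0) = 0, and f(x) = sigma + x**alpha for x > 0.
        if load == 0:
            return 0.0
        return sigma + load ** alpha

    def total_cost(loads, sigma, alpha):
        # Total network cost: the sum of f(l_e) over all links e.
        return sum(link_cost(l, sigma, alpha) for l in loads)

    # Four unit demands over four parallel links, routed two ways.
    loads_spread = [1, 1, 1, 1]   # spread the traffic across all links
    loads_packed = [4, 0, 0, 0]   # consolidate the traffic on one link

    for alpha in (0.5, 2.0):
        print(alpha,
              total_cost(loads_spread, sigma=1.0, alpha=alpha),
              total_cost(loads_packed, sigma=1.0, alpha=alpha))
    # alpha = 0.5 (buy-at-bulk regime): consolidating is cheaper (3.0 vs 8.0).
    # alpha = 2.0 (power-like regime):  spreading is cheaper    (8.0 vs 17.0).

For alpha > 1 the per-link cost grows superlinearly in load, which is why doubling the processing speed more than doubles the power, while the fixed term sigma still rewards leaving links idle.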

We examine the less studied scenario of alpha>1, which is motivated by power minimization in network design. This cost function aims to model a range of computing and communication devices for which doubling processing speed more than doubles the power consumption. We present a polylogarithmic approximation algorithm. Our approach builds upon techniques such as well-linked decomposition, construction of expanders via matchings, and edge-disjoint routing in well-connected graphs.

We will also briefly touch upon a scheduling problem in the context of power minimization.

This is joint work with Matthew Andrews and Spyros Antonakopoulos.

==================================================================

Tardos and Moser meet Lovasz

Prof. Mario Szegedy (Rutgers University)

Lovasz's Local Lemma (LLL) finds a wide variety of applications in combinatorics and computer science. It allows one to deduce the existence of a global solution for any system of constraints whose variable set has intersection graph G, as long as the probability that any given constraint is not satisfied is below a threshold that depends only on the maximal degree of G. Beck turned Lovasz's existence proof into an algorithm, but with a weaker dependence on the maximal degree. Following several improvements, Moser, and then Moser and Tardos, developed a very simple algorithm, which is also optimal when the degree tends to infinity. Their analysis reaches a natural bound (in terms of the entire G) with which the lemma was stated in the original paper of Lovasz.
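As a concrete instance of such a degree-dependent threshold, the standard symmetric form of the LLL requires e * p * (d + 1) <= 1, where p bounds the probability that any single constraint is violated and d is the maximal degree of G; this textbook statement is used here only for illustration, not as the sharper bound discussed in the talk. A minimal Python check:

    import math

    def symmetric_lll_applies(p, d):
        # Symmetric Lovasz Local Lemma: if every constraint is violated with
        # probability at most p and shares variables with at most d other
        # constraints, then e * p * (d + 1) <= 1 guarantees that some global
        # assignment satisfies all constraints simultaneously.
        return math.e * p * (d + 1) <= 1

    # Constraints violated with probability 2**-10, each overlapping at most
    # 100 other constraints: the condition holds, so a solution exists.
    print(symmetric_lll_applies(p=2**-10, d=100))   # True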

Lovasz's original lemma is more general: the events are not necessarily defined via an intersection graph for a family of constraints, but rather by a graph G that describes dependencies. For a fixed dependency graph G, the exact criterion under which the general LLL applies is given by Shearer. We show that the analysis of the MT algorithm can be improved all the way to Shearer's bound. In particular, whenever the probabilities are within a (1-epsilon) factor of Shearer's bound, the Moser-Tardos algorithm runs in expected time at most n/epsilon. This makes the efficient version an exact match to the non-efficient one. We also give a deeper reason for this coincidence: a variant of the Moser-Tardos argument can actually prove the general LLL (although not "efficiently," but "efficiency" does not make much sense here anyway).
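For readers who have not seen it, the Moser-Tardos (MT) algorithm itself is strikingly simple; the sketch below is an illustrative Python rendering of its resampling loop in the variable framework (the constraint representation and the toy hypergraph-coloring example are assumptions made for this note, not code from the paper):

    import random

    def moser_tardos(num_vars, constraints, sample, max_rounds=10**6):
        # constraints: list of (variable_indices, is_violated) pairs, where
        # is_violated(assignment) is True when that "bad event" occurs.
        # sample(i): draws a fresh independent value for variable i.
        assignment = [sample(i) for i in range(num_vars)]
        for _ in range(max_rounds):
            violated = next((c for c in constraints if c[1](assignment)), None)
            if violated is None:
                return assignment          # every constraint is satisfied
            # Resample only the variables the violated constraint depends on.
            for i in violated[0]:
                assignment[i] = sample(i)
        raise RuntimeError("round limit reached without finding a solution")

    # Toy usage: 2-color six elements so that no listed triple is monochromatic.
    triples = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
    constraints = [
        (t, (lambda t: lambda a: a[t[0]] == a[t[1]] == a[t[2]])(t))
        for t in triples
    ]
    print(moser_tardos(6, constraints, sample=lambda i: random.randint(0, 1)))

The result quoted above bounds the expected running time of exactly this resampling loop by n/epsilon whenever the probabilities are within a (1-epsilon) factor of Shearer's bound.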

Finally, we show that the variable framework represents a real restriction: for some G, the LLL bound for the variable version is higher than for the general version. This, of course, raises the question of whether the MT algorithm can efficiently achieve this higher bound.

Joint work with Kashyap Kolipaka.


