Colloquium Details

The IBM Research / NYU / Columbia Theory Day at CUNY

Speakers: Costis Daskalakis (MIT), Yael Tauman-Kalai (Microsoft Research), Bobby Kleinberg (Cornell University), Moni Naor (Weizmann Institute and Princeton)

Location: CUNY Graduate Center, 365 Fifth Avenue Recital Hall

Date: December 11, 2009, 9:30 a.m.

Host: CUNY Graduate Center

Synopsis:

Program

9:30 - 10:00 Coffee and bagels

10:00 - 10:55 Prof. Costis Daskalakis (MIT) On the Complexity of Approximating a Nash Equilibrium

10:55 - 11:05 Short break

11:05 - 12:00 Dr. Yael Tauman-Kalai (Microsoft Research) A Survey on Leakage Resilient Cryptography

12:00 - 2:00 Lunch break

2:00 - 2:55 Prof. Bobby Kleinberg (Cornell University) Pricing Randomized Allocations

2:55 - 3:15 Coffee break

3:15 - 4:10 Prof. Moni Naor (Weizmann Institute and Princeton) Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation

For directions, please see http://www.cs.gc.cuny.edu/dept/location

To subscribe to our mailing list, follow instructions at http://www.cs.nyu.edu/mailman/listinfo/theory-ny

Organizers:
Amotz Barnoy (amotz@sci.brooklyn.cuny.edu)
Yevgeniy Dodis (dodis@cs.nyu.edu)
Tal Rabin (talr@us.ibm.com)
Baruch Schieber (sbar@us.ibm.com)
Cliff Stein (cliff@ieor.columbia.edu)

============================================================

Abstracts

On the Complexity of Approximating a Nash Equilibrium

Prof. Costis Daskalakis (MIT)

According to Bob Aumann, strictly competitive games---a generalization of zero-sum games---are "one of the few areas in game theory, and indeed in the social sciences, where a fairly sharp, unique prediction is made". We present a generalization of these games to multi-player games played on a network, where the nodes are players and the edges are strictly competitive games between their endpoints; that is, we consider a network of competitors. Our generalization of the minmax theorem to the network setting comes with interesting consequences: convexity of equilibria, polynomial-time tractability, and convergence of no-regret algorithms to equilibria. What about extending our results beyond zero-sum games? Previous work has established that computing an exact equilibrium is computationally intractable, but approximation algorithms have made no progress beyond fixed values of the approximation guarantee, while inapproximability results have evaded current techniques. We provide the first inapproximability result for Nash equilibria, which holds for finite values of relative approximation.
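For reference, the classical statement that the talk generalizes to networks is von Neumann's minmax theorem for two-player zero-sum games (a standard fact restated here for completeness; the network version is the talk's contribution). With A the row player's m x n payoff matrix and Delta_m, Delta_n the two players' mixed-strategy simplices:

    \max_{x \in \Delta_m} \min_{y \in \Delta_n} x^{\top} A y
      \;=\;
      \min_{y \in \Delta_n} \max_{x \in \Delta_m} x^{\top} A y .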

============================================================

A Survey on Leakage Resilient Cryptography

Dr. Yael Tauman-Kalai (Microsoft Research)

Traditionally, the theoretical cryptography community has proved the security of cryptographic primitives under the assumption that no information about the secret key is leaked. However, there is a growing realization that, in practice, information about the secret key may be leaked via various so-called "side channel" attacks. Thus, the problem of designing cryptographic primitives that are provably secure even with keys that may be partially compromised has recently gained much popularity. In this talk, I will survey some of these recent results.
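As a concrete point of reference, here is a schematic Python sketch of the bounded-leakage security game that much of this line of work starts from; the class, the names, and the 128-bit budget are illustrative assumptions, not a scheme from the talk.

    import os

    LAMBDA_BITS = 128              # total leakage budget (illustrative choice)
    SECRET_KEY = os.urandom(32)    # a 256-bit secret key

    class LeakageOracle:
        # The adversary may request arbitrary functions of the secret key,
        # so long as the total number of bits returned stays within the
        # budget; the output-length bound is the model's only restriction.

        def __init__(self, sk, budget_bits):
            self._sk = sk
            self._remaining = budget_bits

        def leak(self, f, out_bits):
            if out_bits > self._remaining:
                raise PermissionError("leakage budget exhausted")
            out = f(self._sk)
            assert 8 * len(out) <= out_bits, "f exceeded its declared length"
            self._remaining -= out_bits
            return out

    # A side-channel-style query: leak the first byte of the key.
    oracle = LeakageOracle(SECRET_KEY, LAMBDA_BITS)
    first_byte = oracle.leak(lambda sk: sk[:1], 8)
    # A leakage-resilient primitive must remain secure even given such
    # outputs, for every adversarially chosen f within the budget.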

============================================================

Pricing Randomized Allocations

Prof. Bobby Kleinberg (Cornell University)

We study the use of randomized outcomes (henceforth, "lotteries") in mechanism design. It is well known to economists that optimal mechanisms for single-parameter agents do not randomize over outcomes, but this ceases to be the case when one considers multi-parameter problems. To what extent can a seller improve revenue by pricing lotteries rather than items, and does this modification of the problem affect its computational tractability? We investigate these questions in the context of an archetypical profit maximization problem: selling heterogeneous items in unlimited supply to unit-demand bidders. The answers hinge on whether consumers can purchase only one lottery (the buy-one model) or purchase any set of lotteries and receive an independent sample from each (the buy-many model). In the buy-one model, lotteries can improve revenue by an unbounded factor, and an optimal lottery pricing system can be computed in polynomial time despite the inapproximability of the corresponding item pricing problem. In the buy-many model, lotteries improve profit by only a logarithmic factor and the optimal pricing problem is inapproximable unless NP has subexponential-time randomized algorithms.

This is joint work with Patrick Briest, Shuchi Chawla, and Matt Weinberg.
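To make the buy-one model concrete, here is a small brute-force sketch in Python on a hypothetical three-type instance chosen for illustration (not taken from the paper): the best deterministic item pricing earns expected revenue 10/3, while adding a single 50/50 lottery to the menu raises it to 11/3.

    from itertools import product

    # Three equally likely unit-demand buyer types over two items
    # (hypothetical numbers chosen so the effect is visible).
    TYPES = [(4.0, 1.0), (1.0, 4.0), (3.0, 3.0)]

    def revenue(menu, types):
        """Expected revenue in the buy-one model: each type buys the single
        menu entry (allocation probabilities, price) maximizing its utility
        = expected value - price, or nothing if every utility is negative."""
        total = 0.0
        for v in types:
            best_u, best_p = 0.0, 0.0          # outside option: buy nothing
            for probs, price in menu:
                u = sum(q * vi for q, vi in zip(probs, v)) - price
                if u > best_u or (u == best_u and price > best_p):
                    best_u, best_p = u, price  # ties broken in seller's favor
            total += best_p
        return total / len(types)

    # Item pricing: menus with deterministic allocations only.
    best_det = max(
        (revenue([((1, 0), p1), ((0, 1), p2)], TYPES), p1, p2)
        for p1, p2 in product([x / 20 for x in range(101)], repeat=2)
    )
    print("best deterministic:", best_det)     # ~3.33 (= 10/3) at prices (4, 3)

    # One 50/50 lottery added to the menu strictly improves revenue here.
    lottery_menu = [((1, 0), 4.0), ((0, 1), 4.0), ((0.5, 0.5), 3.0)]
    print("with lottery:", revenue(lottery_menu, TYPES))   # ~3.67 (= 11/3)

On this toy instance the gap is modest; the talk's result is that in the buy-one model the gap can be made unbounded.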

============================================================

Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation

Prof. Moni Naor (Weizmann Institute and Princeton)

The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. Although the first analysis of a dynamic dictionary dates back more than 45 years (to Knuth's 1963 analysis of linear probing), the trade-off between these aspects of performance is still not completely understood.

In this talk I will focus on a recent line of work with the goal of achieving the best possible performance guarantees simultaneously. In particular, the following constructions will be described:

-- A de-amortization of cuckoo hashing that stores n elements using roughly 2n memory words, and guarantees constant-time operations in the worst case with high probability, for any sequence of polynomially many operations.

-- The first dynamic dictionary that enjoys the best of both worlds: a two-level variant of cuckoo hashing that stores n elements using (1+o(1))n memory words, and guarantees constant-time operations in the worst case as above. The construction is based on augmenting cuckoo hashing with a "backyard" that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on bin sizes.

-- A variant of the previous construction that stores n elements using only (1+o(1))B bits, where B is the information-theoretic lower bound for representing a (static) set of size n taken from a universe of size u, and guarantees constant-time operations in the worst case with high probability, as before. This problem was open even in the amortized setting.
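(For concreteness, B here is the standard counting bound

    B = \left\lceil \log_2 \binom{u}{n} \right\rceil \approx n \log_2 \frac{eu}{n} \quad (n \ll u) ,

i.e., roughly n log(u/n) bits rather than the n machine words of a pointer-based layout.)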

Joint work with Yuriy Arbitman and Gil Segev.
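As background for the constructions above, here is a minimal Python sketch of textbook two-table cuckoo hashing, the baseline scheme whose insertions are only expected (amortized) constant time; the talk's results de-amortize it and shrink its space. Class and parameter names are illustrative, and the simple seeded hash stands in for the stronger hash families real constructions require.

    import random

    class CuckooHashTable:
        # Textbook two-table cuckoo hashing: a lookup probes exactly two
        # slots, so lookups are worst-case O(1); insertions below are only
        # expected constant time, which is what the talk de-amortizes.

        def __init__(self, capacity=16):
            self.capacity = capacity
            self.tables = [[None] * capacity, [None] * capacity]
            self.seeds = [random.random(), random.random()]
            self.size = 0

        def _slot(self, which, key):
            return hash((self.seeds[which], key)) % self.capacity

        def contains(self, key):
            return any(self.tables[i][self._slot(i, key)] == key
                       for i in (0, 1))

        def insert(self, key):
            if self.contains(key):
                return
            table = 0
            for _ in range(32):                  # bound the eviction walk
                j = self._slot(table, key)
                if self.tables[table][j] is None:
                    self.tables[table][j] = key
                    self.size += 1
                    return
                # Slot taken: evict its occupant, retry it in the other table.
                key, self.tables[table][j] = self.tables[table][j], key
                table = 1 - table
            self._rehash()                       # likely a cycle: rebuild larger
            self.insert(key)

        def _rehash(self):
            items = [k for t in self.tables for k in t if k is not None]
            self.__init__(self.capacity * 2)
            for k in items:
                self.insert(k)

    # Usage:
    t = CuckooHashTable()
    for w in ["bin", "bucket", "backyard", "cuckoo"]:
        t.insert(w)
    assert t.contains("backyard") and not t.contains("nest")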

Notes:

In-person attendance is available only to those with active NYU ID cards.

