Colloquium Details

Theory Day

Speaker:

Location: Warren Weaver Hall 109

Date: March 2, 2001, 9:30 a.m.

Host: Yevgeniy Dodis

Synopsis:

From Erdos to Algorithms and Back Again


Prof. Joel Spencer
New York University

10:00 a.m. - 10:50 a.m. 


Abstract


Paul Erdos attempted, very often successfully, to prove the existence of mathematical objects with particular properties. His methodologies have been adapted with much success to give efficient algorithms for creating those objects. The algorithmic approach provides new insight into the mathematical proofs. In many cases, as we shall illustrate, it has led to new theorems and conjectures.
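
To make the algorithmic adaptation concrete, here is a minimal Python sketch (an illustration, not taken from the talk) of the method of conditional expectations: the random-cut argument shows that some cut of a graph contains at least half the edges, and fixing the vertices one at a time so the conditional expectation never drops produces such a cut deterministically.

    # Illustrative sketch: derandomizing the "random cut" existence argument by
    # the method of conditional expectations.  Placing each vertex on the side
    # that cuts more of its already-placed neighbours guarantees a cut with at
    # least half of all edges.
    import random

    def large_cut(n, edges):
        side = {}
        for v in range(n):
            gain = {0: 0, 1: 0}
            for (a, b) in edges:
                other = b if a == v else a if b == v else None
                if other is not None and other in side:
                    for s in (0, 1):
                        if s != side[other]:
                            gain[s] += 1
            side[v] = 0 if gain[0] >= gain[1] else 1
        return side, sum(1 for (a, b) in edges if side[a] != side[b])

    # Example: a random graph; the returned cut always has >= len(edges)/2 edges.
    edges = list({(a, b) for a in range(20) for b in range(a + 1, 20)
                  if random.random() < 0.3})
    side, cut = large_cut(20, edges)
    print(cut, ">=", len(edges) / 2)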


On the Approximability of Trade-offs


Dr. Mihalis Yannakakis 
Bell Labs, Lucent Technologies

10:50 a.m. - 11:40 a.m.


Abstract


We discuss problems in multiobjective optimization, in which solutions to a combinatorial optimization problem are evaluated with respect to several cost criteria, and we are interested in the trade-off between these objectives (the so-called Pareto curve). We point out that, under general conditions, there is a polynomially succinct curve that approximates the Pareto curve within any desired accuracy, and give conditions under which this approximate Pareto curve can be constructed in polynomial time. We discuss in more detail the class of linear multiobjective problems and multiobjective query optimization.
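
As a hedged illustration of the succinctness claim (my sketch, not the talk's construction), the standard way to obtain an epsilon-approximate Pareto set for two minimization objectives is to lay a multiplicative (1 + epsilon) grid over objective space and keep one representative solution per nonempty cell; the number of cells, and hence the size of the set, is polynomial in 1/epsilon and in the bit-size of the costs.

    # Hedged sketch: an epsilon-approximate Pareto set for a bi-objective
    # minimization problem, keeping one representative per cell of a
    # multiplicative (1 + eps) grid over the (positive) objective values.
    import math

    def approx_pareto(points, eps):
        cell = lambda x: math.floor(math.log(x, 1 + eps))
        best = {}                          # grid cell -> representative
        for (c1, c2) in points:
            key = (cell(c1), cell(c2))
            if key not in best or (c1, c2) < best[key]:
                best[key] = (c1, c2)
        # Discard representatives dominated by another representative.
        frontier, best_c2 = [], float("inf")
        for (c1, c2) in sorted(best.values()):
            if c2 < best_c2:
                frontier.append((c1, c2))
                best_c2 = c2
        return frontier                    # every input point is (1+eps)-dominated here

    print(approx_pareto([(10, 90), (11, 88), (40, 40), (90, 12), (95, 11), (50, 50)],
                        eps=0.1))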

LUNCH BREAK - 11:40 a.m. - 1:30 p.m.


On the Standard Written Proof


Prof. Johan Hastad 
IAS & Royal Institute of Technology

1:30 p.m. - 2:20 p.m.


Abstract


We use the PCP-theorem and the parallel repetition theorem of Raz together with the long code of Bellare, Goldreich and Sudan to obtain a very interesting proof for any NP statement. This written proof can be checked in many ways and through natural reductions yields inapproximability results for many NP-hard optimization problems such as satisfying the maximal number of linear equations, Max-Sat and set-splitting. We will discuss the general proof method, give examples of constructions and show how to analyze these constructions.
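
As a small illustration of the objects involved (my sketch, with simplified parameters, not the talk's exact construction), the long code of a value v encodes v as the table A(f) = f(v) over all functions f into {-1, +1}; a 3-query test of this kind multiplies three table entries and checks the product, which is exactly a linear equation in three queried bits and is the link to satisfying linear equations mentioned above.

    # Illustrative sketch (parameters and names are mine): the long code of
    # v in {0,...,n-1} is the table A(f) = f(v) over all f: [n] -> {-1,+1}.
    # A 3-query test: pick random f, g and a sparse perturbation e, then
    # check A(f) * A(g) * A(f*g*e) == 1.  A correct long code passes the
    # check with probability 1 - delta.
    import itertools, random

    n, delta = 3, 0.05
    domain = list(range(n))
    all_functions = [dict(zip(domain, vals))
                     for vals in itertools.product((-1, 1), repeat=n)]
    key = lambda f: tuple(f[x] for x in domain)

    def long_code(v):
        return {key(f): f[v] for f in all_functions}

    def acceptance(A, trials=20000):
        accept = 0
        for _ in range(trials):
            f, g = random.choice(all_functions), random.choice(all_functions)
            e = {x: (-1 if random.random() < delta else 1) for x in domain}
            h = {x: f[x] * g[x] * e[x] for x in domain}
            accept += (A[key(f)] * A[key(g)] * A[key(h)] == 1)
        return accept / trials

    print(acceptance(long_code(1)))        # close to 1 - delta, i.e. about 0.95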


What Can You Do in One or Two Passes?


Prof. Ravi Kannan 
Yale University

2:20 p.m. - 3:10 p.m.


Abstract


We consider a collection of graph and matrix problems where the input data is very large. Computing with a small randomly chosen sub-matrix (or subgraph) can be shown to yield approximate solutions to many problems, such as the max-cut problem in dense graphs. Here, instead of blindly doing random sampling, we take a two-phase approach. In the first phase, we make one pass through the data to compute the probability distribution with which to sample. Then in the second phase, we draw a small sample and compute with it. We are able to tackle a much larger class of problems, which includes numerical problems arising in "Principal Component Analysis" as well as graph and combinatorial problems; also, a weaker notion of density suffices, one that allows for differing importance (weight) attached to different rows/vertices. We will discuss this two-phase paradigm in general and argue that it is consistent with the relative growth of the computing resources: speed and space.
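
As a hedged sketch of the two-phase paradigm (my illustration, not the talk's exact algorithm): the first pass over a large matrix computes the squared row norms, which define the sampling distribution; the second pass draws a small number of rows with those probabilities and rescales them, so the sample preserves A^T A in expectation and its top singular directions approximate the principal components.

    # Hedged sketch of the two-phase paradigm (illustration only): pass 1
    # streams the rows of a large matrix A and records squared row norms;
    # pass 2 samples s rows with probability proportional to those norms
    # ("length-squared sampling") and rescales them so that E[S^T S] = A^T A.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100_000, 50))   # stand-in for data too large to revisit freely

    def row_stream(A):
        for row in A:                         # in practice: read from disk or network
            yield row

    # Pass 1: one pass to build the sampling distribution.
    norms_sq = np.array([np.dot(r, r) for r in row_stream(A)])
    probs = norms_sq / norms_sq.sum()

    # Pass 2: draw a small sample and rescale row i by 1 / sqrt(s * p_i).
    s = 500
    idx = rng.choice(len(probs), size=s, replace=True, p=probs)
    S = A[idx] / np.sqrt(s * probs[idx])[:, None]

    # S^T S approximates A^T A; its top singular directions approximate
    # the principal components of A.
    err = np.linalg.norm(S.T @ S - A.T @ A) / np.linalg.norm(A.T @ A)
    print(f"relative Frobenius error of S^T S vs A^T A: {err:.3f}")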

Coffee break - 3:10 p.m. - 3:30 p.m.


An Application of Complex Semidefinite Programming to Approximation Algorithms


Dr. David Williamson 
IBM Research

3:30 p.m. - 4:20 p.m.


Abstract


A number of recent papers on approximation algorithms have used the square roots of unity, -1 and 1, to represent binary decision variables for problems in combinatorial optimization, and have relaxed these to unit vectors in real space using semidefinite programming in order to obtain near-optimal solutions to these problems. In this talk, we consider using the cube roots of unity, 1, e**{i(2/3)pi}, and e**{i(4/3)pi}, to represent ternary decision variables for problems in combinatorial optimization. Here the natural relaxation is that of unit vectors in complex space. We use an extension of semidefinite programming to complex space to solve the natural relaxation, and use a natural extension of the random hyperplane technique to obtain near-optimal solutions to the problems. In particular, we consider the problem of maximizing the total weight of satisfied equations x_u - x_v = c (mod 3) and inequations x_u - x_v not= c (mod 3), where x_u is in {0,1,2} for all u. This problem can be used to model the MAX 3-CUT problem and a directed variant we call MAX 3-DICUT. For the general problem, we obtain a .79373-approximation algorithm. If the instance contains only inequations (as it does for MAX 3-CUT), we obtain a performance guarantee of 3/(4pi**2)(arccos(-1/4))**2 + 7/12, which is about .83601. This compares with proven performance guarantees of .800217 for MAX 3-CUT (by Frieze and Jerrum) and 1/3 + 10**{-8} for the general problem (by Andersson, Engebretsen, and Hastad). This is joint work with Michel X. Goemans of MIT.
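
As a hedged sketch of the rounding step described above (my toy illustration; solving the complex semidefinite program itself is omitted and an ideal solution is used in its place), each ternary variable corresponds to a unit vector in complex space, and the random-hyperplane idea becomes: draw a random complex Gaussian direction, project each vector onto it, and read off a value in {0,1,2} from which third of the complex plane the projection's argument lands in.

    # Hedged sketch of the complex rounding idea (illustration only): ternary
    # variables correspond to the cube roots of unity 1, w, w^2 with
    # w = e^{2*pi*i/3}.  Given unit vectors V in complex space, draw a random
    # complex Gaussian direction r, project, and assign each variable the
    # sector (third of the plane) containing the argument of the projection.
    import numpy as np

    rng = np.random.default_rng(1)
    w = np.exp(2j * np.pi / 3)

    def round_vectors(V, rng):
        n, d = V.shape
        r = rng.standard_normal(d) + 1j * rng.standard_normal(d)  # random complex direction
        proj = V @ np.conj(r)                                      # complex inner products
        angles = np.mod(np.angle(proj), 2 * np.pi)
        return (angles // (2 * np.pi / 3)).astype(int)             # sector 0, 1 or 2

    # Toy check: an "ideal" solution where v_u is literally w**y_u (d = 1).
    y = np.array([0, 1, 2, 1, 0, 2])
    V = (w ** y).reshape(-1, 1)
    x = round_vectors(V, rng)
    # The rounding is invariant under a global rotation, so the differences
    # x_u - x_v mod 3 match y_u - y_v mod 3 on this ideal instance.
    print(np.all((x - x[0]) % 3 == (y - y[0]) % 3))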

Notes:

In-person attendance is only available to those with active NYU ID cards.

