**Speaker:**

**Location:**
Warren Weaver Hall 109

**Date:**
April 19, 2002, 9:30 a.m.

**Host:** Yevgeniy Dodis

**Synopsis:**

Prof. Noga Alon

Tel Aviv University and IAS Princeton

We consider the problem of learning a matching (i.e., a graph in which all vertices have degree 0 or 1) in a model where the only allowed operation is to query whether a set of vertices induces an edge. This is motivated by a problem that arises in molecular biology. In the deterministic non-adaptive setting, we prove a (0.25+o(1))n^2 upper bound and a nearly matching 0.16 n^2 lower bound for the minimum possible number of required queries. In contrast, if we allow randomness then we obtain (by a randomized, non-adaptive algorithm) a much lower O(n \log n) upper bound, which is best possible (even for randomized fully adaptive algorithms).
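The abstract's query model is easy to simulate. As a point of comparison for the bounds above, a minimal sketch (names and setup are my own, not the paper's) of the naive deterministic non-adaptive strategy, which queries every pair and so uses about n^2/2 queries; the paper's (0.25+o(1))n^2 upper bound improves on exactly this:

```python
from itertools import combinations

def induces_edge(S, matching):
    # Oracle: does the vertex set S contain both endpoints of some matched pair?
    return any(u in S and v in S for u, v in matching)

def learn_matching_naive(n, matching):
    # Naive deterministic non-adaptive strategy: fix all ~n^2/2 pairwise
    # queries in advance.  Because a matching's edges are vertex-disjoint,
    # a two-element query {u, v} answers True iff (u, v) is an edge.
    pairs = list(combinations(range(n), 2))  # chosen before any answer: non-adaptive
    return sorted(p for p in pairs if induces_edge(set(p), matching))
```

For example, with n = 5 and hidden matching {(0, 1), (2, 3)}, the ten pairwise queries recover exactly those two edges. The paper's randomized non-adaptive algorithm achieves O(n log n) queries, far below this pairwise scheme.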

Joint work with R. Beigel, S. Kasif, S. Rudich and B. Sudakov.

**SHORT BREAK - 10:55 a.m. - 11:05 a.m.**

**11:05 a.m. - 12:00 noon**

Prof. Phil Klein

Brown University and WaveMarket

As part of a software suite for supporting location-based services, WaveMarket has developed a fast engine for computing a single-source, single-sink shortest path in a road network. The engine is faster than competing products by more than an order of magnitude. In this talk, we describe some of the algorithmic ideas behind this advance.

**Lunch break - 12:00 noon - 2:00 p.m.**

**2:00 p.m. - 2:55 p.m.**

Prof. Lisa Fleischer

CMU

Standard network flows capture many aspects of routing problems but miss one crucial element: flows occur over time. Ford and Fulkerson recognized this omission when they introduced flows over time. In this model, there is a transit time, or delay, associated with each arc to address the fact that flow does not travel instantaneously across the arc. Now, given a time bound, it is possible to consider the flow-over-time equivalents of standard network flow problems. This allows for more accurate modelling of problems in telecommunications, transportation, financial planning, manufacturing, and logistics.

Ford and Fulkerson showed how to solve the maximum flow over time problem by reducing it to a minimum cost flow problem. However, the minimum cost flow over time problem is already NP-hard. And, the obvious linear program formulation of the multicommodity flow over time problem depends linearly on the time bound, so is in general not of polynomial size. Thus, problems in flows over time are inherently more difficult than the standard flow problems.
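The reduction mentioned above rests on temporally repeated flows, a standard construction (stated here for context; it is not spelled out in the abstract). Decompose a static flow $f$ into paths $P$ carrying $x_P$ units each, with $\tau(P)$ the total transit time of $P$. Sending $x_P$ units into each path $P$ during the interval $[0, T - \tau(P)]$ yields a flow over time of value

```latex
\sum_{P} \bigl(T - \tau(P)\bigr)\, x_P \;=\; T\,|f| \;-\; \sum_{a} \tau_a f_a ,
```

so maximizing the flow over time reduces to a single min-cost flow computation in which the transit times play the role of arc costs.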

We will introduce flows over time, review some of the classical results, and describe a newly devised FPAS for both the multicommodity flow over time problem and the same problem with costs; this is joint work with Martin Skutella. Time permitting, we may touch upon further extensions.

**Coffee break - 2:55 p.m. - 3:15 p.m.**

**3:15 p.m. - 4:10 p.m.**

Dr. Venkatesan Guruswami

University of California, Berkeley

We present a simple, explicit construction of error-correcting codes that have a near-optimal rate vs. error-correction trade-off and are encodable and decodable in linear time. Formally, we construct, for every $r$ ($0 < r < 1$), a family of codes of rate $r$ that can be encoded in linear time and decoded in linear time from up to a fraction $(1-r-\epsilon)/2$ of errors, for arbitrary $\epsilon > 0$. The codes are defined over a fixed alphabet whose size depends only on $\epsilon$. These codes nearly match the "Singleton bound" and thus their rate vs. error-correction trade-off is essentially optimal, and so is their encoding/decoding time (up to constant factors). Previously known codes over a constant-sized alphabet that approached the Singleton bound (e.g., certain algebraic-geometric codes) suffered from complicated constructions and/or high (certainly, super-linear) encoding/decoding complexity.

Concatenating these codes with a suitable inner code of constant size yields constructions of linear time encodable/decodable binary codes that match the so-called "Zyablov bound" (the best rate vs. distance trade-off known for explicit binary codes with reasonable decoding complexity), for the entire range of error fractions. In contrast, previously known linear-time binary codes, due to (Spielman, 1996), could only correct a very small fraction (about $10^{-6}$) of errors.

Our approach is to first construct, by adapting techniques from a recent work of (Zemor, 2001), certain high rate linear-time codes that can correct a small fraction of errors, and then to increase the error-resilience using expanders akin to an approach used for erasure codes by (Alon, Edmonds, Luby, 1995).

Joint work with Piotr Indyk (MIT).

**Notes:**

Refreshments will be offered starting 15 minutes prior to the scheduled start of the talk.