Colloquium Details

New York Area Theory Day

Organizers: IBM/NYU/Columbia

Location: Schapiro (CEPSR) Building, Columbia University Davis Auditorium (Room 412)

Date: April 28, 2017, 9:30 a.m.

Host: Columbia University, New York

Synopsis:

The New York Area Theory Day is a semi-annual conference that aims to bring together people in the New York metropolitan area for one day of interaction and discussion. The meeting is free and open to everyone; in particular, students are encouraged to attend.

For directions, please see here.

To subscribe to our mailing list, follow the instructions here.


Program 

9:30 - 10:00 Coffee and bagels
10:00 - 10:55 Professor Ran Canetti
On Single-Server Private Information Retrieval with Sublinear Work for Server
10:55 - 11:05 Short break
11:05 - 12:00 Professor Alina Ene
From Minimum Cut to Submodular Minimization: Leveraging the Decomposable Structure
12:00 - 2:00 Lunch break
2:00 - 2:55 Dr. Zeyuan Allen-Zhu
Faster Stochastic Gradient Descent: From Nesterov's Momentum to Katyusha Momentum
2:55 - 3:15 Coffee break
3:15 - 4:10 Professor Oded Regev
A Reverse Minkowski Theorem
4:30 - 5:30 Follow-up social (location TBD)


Speakers

Professor Ran Canetti (Boston University and Tel Aviv University)

On Single-Server Private Information Retrieval with Sublinear Work for Server

Private Information Retrieval (PIR) allows clients to retrieve entries from a public database without disclosing which locations they access. Traditionally, the emphasis is on keeping the client's work sublinear, while the server's work is taken to be inevitably at least linear in the database size. Can we have schemes where the server's amortized per-query work is sublinear? While we know how to achieve this when the database is split among multiple non-colluding servers, the existence of single-server PIR with sublinear amortized server work has remained open.
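The multi-server setting mentioned above can be illustrated with the classic two-server XOR scheme; the sketch below is for intuition only and is not the talk's construction (all names are illustrative). Note that both servers still do linear work per query, which is exactly what the talk's single-server schemes aim to avoid in the amortized sense.

```python
import secrets

def query(db_size: int, index: int):
    """Client: produce two subsets of positions that differ only at `index`."""
    # S1 is a uniformly random subset of {0, ..., db_size-1}, as a 0/1 mask.
    s1 = [secrets.randbelow(2) for _ in range(db_size)]
    # S2 = S1 with the bit at `index` flipped.
    s2 = list(s1)
    s2[index] ^= 1
    return s1, s2

def answer(db, subset):
    """Server: XOR the database bits selected by the subset (linear work)."""
    acc = 0
    for bit, selected in zip(db, subset):
        if selected:
            acc ^= bit
    return acc

def reconstruct(a1, a2):
    """Client: the XOR of the two answers equals db[index]."""
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]   # toy public database of bits
s1, s2 = query(len(db), 5)
assert reconstruct(answer(db, s1), answer(db, s2)) == db[5]
```

Each server individually sees a uniformly random subset, so neither learns the queried index as long as the two servers do not collude.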

We present a number of single-server PIR schemes where, following a polynomial-work preprocessing stage, the per-query work of both server and client is poly-logarithmic in the database size. The schemes offer different security properties under a range of assumptions. All schemes are based on low-degree extensions of the database in high-dimensional finite fields, akin to Reed-Muller encoding, along with additional mechanisms that allow for local decoding that hides the decoded points.

Joint work with Justin Holmgren and Silas Richelson.


Professor Alina Ene (Boston University)

From Minimum Cut to Submodular Minimization: Leveraging the Decomposable Structure

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work aims to obtain very practical algorithms for minimizing functions that are sums of “simple” functions. This class of functions provides an important bridge between functions with a rich combinatorial structure -- notably, the cut function of a graph -- and general submodular functions, and it brings along powerful combinatorial structure reminiscent of graphs as well as a fair bit of modeling power that greatly expands its applications. In this talk, we describe recent progress on designing very efficient algorithms for minimizing decomposable functions and understanding their combinatorial structure.
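The bridge between graph cuts and submodular minimization can be made concrete with a tiny brute-force check (a toy sketch on a hypothetical example graph, not the talk's algorithms): the cut function of a graph is submodular, and minimizing it over nontrivial subsets recovers the global minimum cut.

```python
from itertools import combinations

# Small example graph on 4 vertices (illustrative, not from the talk).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut(S):
    """Number of edges crossing the vertex set S: a submodular function."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

# Verify submodularity: f(A) + f(B) >= f(A | B) + f(A & B) for all A, B.
subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(range(n), r)]
for A in subsets:
    for B in subsets:
        assert cut(A) + cut(B) >= cut(A | B) + cut(A & B)

# Global minimum cut over nontrivial subsets. This enumeration takes
# exponential time; the talk concerns fast algorithms for sums of
# "simple" submodular functions.
best = min(cut(S) for S in subsets if 0 < len(S) < n)
print(best)  # → 2
```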

This talk is based on joint work with Huy Nguyen (Northeastern University) and Laszlo Vegh (London School of Economics).


Dr. Zeyuan Allen-Zhu (Institute for Advanced Study and Princeton)

Faster Stochastic Gradient Descent: From Nesterov's Momentum to Katyusha Momentum

Nesterov's momentum is famously known for accelerating "full-gradient descent", and is thus useful in building fast first-order algorithms. However, in the offline stochastic setting, counterexamples exist and prevent Nesterov's momentum from providing similar acceleration, even if the underlying problem is convex. In this talk, I shall discuss a Katyusha momentum framework that provides the first direct acceleration of stochastic gradient descent. This work will appear in STOC 2017.
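For readers unfamiliar with the momentum idea, the following toy sketch shows the mechanics of Nesterov's update (the gradient is evaluated at a "looked-ahead" point) next to plain gradient descent, on a hypothetical one-dimensional quadratic. This illustrates only the classical momentum update, not the Katyusha framework itself.

```python
# Toy objective f(x) = 0.5 * L * x^2, so grad f(x) = L * x (illustrative).
def grad(x, L=10.0):
    return L * x

def gd(x0, eta, steps):
    """Plain gradient descent: step against the gradient at x."""
    x = x0
    for _ in range(steps):
        x -= eta * grad(x)
    return x

def nesterov(x0, eta, mu, steps):
    """Nesterov's momentum: take the gradient at the looked-ahead
    point y = x + mu * (x - x_prev), then step from y."""
    x, x_prev = x0, x0
    for _ in range(steps):
        y = x + mu * (x - x_prev)
        x_prev, x = x, y - eta * grad(y)
    return x

# Both iterates converge to the minimizer x* = 0 on this easy problem.
print(abs(gd(1.0, 0.05, 50)), abs(nesterov(1.0, 0.05, 0.5, 50)))
```

The step size `eta` and momentum weight `mu` here are illustrative; momentum's provable advantage shows up on ill-conditioned smooth convex problems, where it improves the iteration count from O(L/eps) to O(sqrt(L/eps)).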


Professor Oded Regev (New York University)

A Reverse Minkowski Theorem

Informally, Minkowski's first theorem states that lattices that are globally dense (have small determinant) are also locally dense (have lots of points in a small ball around the origin). This fundamental result dates back to 1891 and has a very wide range of applications.
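The "globally dense implies locally dense" direction can be checked numerically on a small example (a sketch with a hypothetical 2-D lattice, unrelated to the talk's new result): Minkowski's first theorem guarantees a nonzero lattice vector of Euclidean norm at most sqrt(n) * det(L)^(1/n), which we verify by enumeration.

```python
from itertools import product
from math import sqrt

# Hypothetical 2-D lattice basis; its determinant is |3*2 - 1*1| = 5.
b1, b2 = (3.0, 1.0), (1.0, 2.0)
det = abs(b1[0] * b2[1] - b1[1] * b2[0])
n = 2

# Shortest nonzero vector among small integer combinations a*b1 + b*b2.
shortest = min(
    sqrt((a * b1[0] + b * b2[0]) ** 2 + (a * b1[1] + b * b2[1]) ** 2)
    for a, b in product(range(-5, 6), repeat=2)
    if (a, b) != (0, 0)
)

# Minkowski's bound: a lattice of determinant 5 in dimension 2 must
# contain a nonzero vector of norm at most sqrt(2) * 5^(1/2) = sqrt(10).
assert shortest <= sqrt(n) * det ** (1 / n)
print(shortest)  # → 2.236..., i.e. sqrt(5), attained by b2 = (1, 2)
```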

I will present a proof of a reverse form of Minkowski's theorem, conjectured by Daniel Dadush in 2012, showing that locally dense lattices are also globally dense (in the appropriate sense).

Based on joint papers with Daniel Dadush and Noah Stephens-Davidowitz.

