Speaker: Sebastian Seung, Howard Hughes Medical Institute & MIT
Location: Warren Weaver Hall 1302
Date: May 1, 2009, 11:30 a.m.
Host: Yann LeCun
The problem of boundary detection has been studied intensively since the dawn of computer vision. Originally, boundaries were detected with thresholded gradient filters. Later, the output of gradient filtering was "cleaned up" by incorporating contextual information through relaxation labeling, Markov random fields, graph partitioning, and other formalisms. More recently, interest in boundary detection has been revived by the Berkeley Segmentation Data Set, in which boundaries have been manually traced in natural images by humans. Such datasets allow quantitative benchmarking of image segmentation performance, and they enable the use of machine learning to improve accuracy by training a computer to emulate human boundary judgments. One problem with natural images, however, is that the notion of an object boundary is not well-defined, so human judgments disagree considerably. In my laboratory, we have instead been studying images from biological microscopy, in which object boundaries are well-defined.

In the conventional machine learning approach, a boundary detector is trained by minimizing its pixel-level disagreement with humans. This criterion is not appropriate, however, if boundary detection is only a means to the ultimate goal of image segmentation rather than an end in itself. We are developing new methods for training boundary detectors that specifically target segmentation error rather than pixel error. I will demonstrate our methods on a challenging application: tracing the branches of neurons in 3D images of brain tissue taken by electron microscopy. The empirical results show a dramatic reduction in the number of splits and mergers, which are the true segmentation errors.
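The gap between pixel error and segmentation error can be made concrete with a toy example (my own illustration, not material from the talk): on a 1-D "image" of 16 pixels with one true boundary, a predicted boundary map that is shifted by one pixel has a larger pixel error than one that misses the boundary entirely, yet the missing boundary merges two segments and is far worse by a pairwise (Rand-style) segmentation error. The functions and variable names below are hypothetical, chosen only for this sketch.

```python
import numpy as np
from itertools import combinations

def pixel_error(pred_boundary, true_boundary):
    """Fraction of pixels where the two boundary maps disagree."""
    return np.mean(pred_boundary != true_boundary)

def rand_error(pred_labels, true_labels):
    """Fraction of pixel pairs on which the two segmentations
    disagree about 'same segment' vs. 'different segments'."""
    n = len(true_labels)
    disagreements = sum(
        (pred_labels[i] == pred_labels[j]) != (true_labels[i] == true_labels[j])
        for i, j in combinations(range(n), 2)
    )
    return disagreements / (n * (n - 1) / 2)

n = 16
true_b = np.zeros(n, dtype=int); true_b[8] = 1   # one true boundary pixel
pred_a = np.zeros(n, dtype=int); pred_a[7] = 1   # boundary shifted by one pixel
pred_b = np.zeros(n, dtype=int)                  # boundary missed: a merger

# Segment labels from a 1-D boundary map: label increments at each boundary.
labels = lambda b: np.cumsum(b)

# Shifted boundary: 2/16 pixels wrong, but segments nearly identical.
# Missed boundary: only 1/16 pixels wrong, but two segments fuse into one,
# so over half of all pixel pairs are misclassified.
pe_a, pe_b = pixel_error(pred_a, true_b), pixel_error(pred_b, true_b)
re_a, re_b = rand_error(labels(pred_a), labels(true_b)), rand_error(labels(pred_b), labels(true_b))
```

Here the merger (pred_b) wins on pixel error (1/16 vs. 2/16) but loses badly on segmentation error (64/120 vs. 15/120 of pixel pairs wrong), which is exactly why a training criterion targeting pixel error can be misaligned with the segmentation goal.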
Refreshments will be offered starting 15 minutes prior to the scheduled start of the talk.