
VLG: The Vision-Learning-Graphics Group
The Courant Institute of Mathematical Sciences
Center for Neural Science
New York University

The Computational and Biological Learning Lab is part of
the Vision-Learning-Graphics Group (VLG) of
the NYU Computer Science Department, which is part of
the Courant Institute of Mathematical Sciences at
New York University.

Our research is focused on Machine Learning, Computer Vision, Robotics, Computational Neuroscience, and various related topics.

Contact information:
Yann LeCun
The Courant Institute of Mathematical Sciences
New York University
715 Broadway, New York, NY 10003, USA.
Web: http://yann.lecun.com. Email: yann [ a t ] cs.nyu.edu; Telephone: (212)998-3283;

Rob Fergus
The Courant Institute of Mathematical Sciences
New York University
715 Broadway, New York, NY 10003, USA.
Web: http://www.cs.nyu.edu/~fergus. Email: fergus [ a t ] cs.nyu.edu;


For an up-to-date list of events and seminars follow this link.

older news and events

  • 2010-10-21: There is an article about Yann and Convolutional Networks in The Economist.
  • 2009-05-06: Final class projects for Yann's undergraduate course Introduction to Robotics were featured on a number of gadget- and robot-related blogs, notably slashgear and RoboCommunity. Both blogs mention the final class project, which consisted of getting Rovio robots to play soccer. The Rovios are controlled over Wifi by laptop computers programmed in Lush to run the vision and control algorithms.
  • 2009-05-01: Sebastian Seung (MIT) will give the NYU Computer Science Colloquium: Tracing the brain's wires: image segmentation via machine learning, Friday, May 1, 2009 at 11:30AM, Warren Weaver Hall, Room 1302, 251 Mercer Street, New York, NY, 10012-1110
  • 2009-04-30: CBLL Thesis Defense: Marc'Aurelio Ranzato: Unsupervised Learning of Feature Hierarchies. Thursday April 30, 2009 at 1:30 PM, VLG: Room 1221, 715/719 Broadway, New York, NY 10003.
  • 2009-04-19: Announcing the 2009 ICML Workshop on Learning Feature Hierarchies. This workshop will take place on June 18, 2009 in Montreal, Canada, in conjunction with the 26th International Conference on Machine Learning (ICML 2009). The organizers are: Kai Yu (NEC Labs-America), Ruslan Salakhutdinov (U. of Toronto), Yann LeCun (Courant Institute, NYU), Geoffrey Hinton (U. of Toronto) and Yoshua Bengio (U. of Montreal).
  • 2009-04-15: Podcast of an interview with Yann LeCun: mp3 podcast available from the New York Academy of Science website.
  • 2009-04-10: New paper on feature learning available: Learning Invariant Features through Topographic Filter Maps, to appear in CVPR 2009. The paper describes a method to learn SIFT-like, locally-invariant feature detectors from data. The method minimizes a reconstruction criterion under a "block sparsity constraint", which minimizes the number of "pools" of features that are active. Filters whose outputs are grouped in a pool end up detecting similar features, and the pool outputs can be interpreted as complex-cell-like, locally-invariant features.
  • 2008-11-28: New papers on our off-road mobile robot available: Learning Long-Range Vision for Autonomous Off-Road Driving and A Multi-Range Architecture for Collision-Free Off-Road Robot Navigation. These two long papers will appear in the Journal of Field Robotics. They describe, in gory detail, the off-road mobile robot system we developed for the DARPA LAGR program.
  • 2008-11-27: New paper on epilepsy seizure prediction available: Comparing SVM and Convolutional Networks for Epileptic Seizure Prediction from Intracranial EEG. The bottom line: our convolutional net-based system can predict epilepsy seizures one hour in advance for all patients in the publicly available Freiburg dataset with no false alarms.
  • 2008-10-19: Jeff Han on the Daily Show. Our friend and collaborator Jeff Han was on the Daily Show yesterday showing off his magic touch screen as part of a spoof conspiracy thriller involving John Oliver and CNN's John King. In case you missed it, here is the video.
  • 2008-10-16: Old Courant Institute Reports at the Internet Archive. The Internet Archive has scanned the archives of old tech reports from the Courant Institute of Mathematical Sciences (our home institution) and put them up on its website. The best part is that the Archive uses our own DjVu format to distribute the documents (among other, inferior formats ;-). The collection includes several reports by The Man Himself: Richard Courant. The "DjVu" links on the archive website actually fire up a Java DjVu viewer (which is kind of slow), instead of calling the DjVu viewer or DjVu plug-in in your browser. The best approach is to click on the "http" link, and then click on the DjVu file. This project is funded by a grant to the Archive from the Sloan Foundation.
  • 2008-10-15: upcoming seminars: 10/28: Kilian Weinberger (Yahoo), 10/29: Ronan Collobert (NEC), 11/26: Alex Berg (Columbia), 12/03: Fei-Fei Li (Princeton), 12/17: Jason Weston (NEC). Click here for details.
  • 2008-10-02: CBLL is awarded an NSF grant from the Emerging Frontiers in Research and Innovation program entitled "Deep Learning in the Mammalian Visual Cortex". Collaborators on this project are Ed Boyden (MIT), Yang Dan (Berkeley), Yann LeCun (NYU), Andrew Ng (Stanford), and Sebastian Seung (MIT). There is an NSF announcement, and a press release from NYU.
  • 2008-04-18: A cool video of our off-road mobile robot is available below. More details about the project (and more, higher-quality videos) are available from our LAGR project web page.

  • 2008-04-14: RARE SPECIAL EVENT: Talk by Geoff Hinton (U. of Toronto), Monday 14th at 11:30 at the Courant Institute, Warren Weaver Hall, Room 1302. Title: An efficient way to learn multiplicative interactions
  • 2008-04-12: upcoming CBLL seminars: 04/16: Michael Littman (Rutgers); 04/23: David Blei (Princeton); 04/30: Tony Jebara (Columbia).
  • 2008-04-11: Video and slides of Yann's talk on deep learning at Google and Stanford.
  • 2008-04-11: A new video describing our LAGR off-road mobile robot system is available here.
  • 2008-03-19: [Press] Another article about Jeff Hawkins, this time in Forbes magazine, in which Yann is quoted. He is described as a "neuroscience professor at MIT". They must have mixed up affiliations and quotes between Yann and Sebastian Seung.
  • 2008-03-08: [Press] There is an article in The Economist this week about Jeff Hawkins and his new company Numenta, in which Yann and Geoffrey Hinton are quoted.
  • 2008-02-26: [Press] There is an article about our robotics project (LAGR) on the on-line version of Scientific American.
  • 2008-02-25: [Press] new videos of a 15-minute interview of Yann LeCun on machine learning research, lecturing styles, where NIPS is going, the philosophy of science, and various other topics. (provided by Video Lectures).
  • 2008-02-25: new videos and slides available on the talk page, including Yann's talks at the Deep Learning NIPS 2007 satellite session, and at the NIPS workshop on large-scale and efficient learning.
  • 2008-02-24: slides and videos of all the talks at the Deep Learning NIPS 2007 satellite session are now available at the meeting's main web site.
  • 2007-04-18: RARE EVENT: Geoffrey Hinton is giving a talk at NYU on April 18th at 11:30 AM (see announcement).
  • 2007-04-11: CBLL seminar by Pradeep Ravikumar from CMU. CBLL, April 11th at 11:30 AM.
  • 2007-03-01: A new paper on the limitations of kernel methods and the challenges of scaling up learning algorithms is available.
  • 2006-08-21: A tutorial paper on Energy-Based Learning is finally available. The slides of the corresponding tutorial talk are also available.
  • 2006-02-06: [Press] an article in Computerworld about machine learning talks about Yann and mentions convolutional networks.
  • 2005-10-20: CBLL seminar by Sebastian Seung. See details here.
  • 2005-09-07: Yann's 4-hour tutorial at the 2005 IPAM Graduate Summer School on Intelligent Extraction of Information from Graphs and High Dimensional Data, is available. The slides and the streaming video of the lectures are available here. The tutorial includes such topics as energy-based models, convolutional nets, trainable metrics, and graph transformer networks, with applications (including live demos) to handwriting recognition, invariant object recognition, face detection, image segmentation, and vision-based obstacle avoidance for mobile robots. The Summer school's main web site includes videos and slides of all the talks.
  • 2005-02-28: upcoming CS colloquia on machine learning: Lawrence Saul (02/28), Tong Zhang (03/09), Alex Gray (03/11), David Blei (04/04), Gert Lanckriet (04/20), Guy Lebanon (04/22).
  • 2005-02-03: upcoming seminars by Alex Pouget, John Langford, and Larry Carin.
  • 2005-02-02: new web pages for the face detection project and the Energy-Based Model project.
  • 2005-02-01: Yann will speak at two summer schools this year: the Machine Learning Summer School at Chicago, May 16-27, 2005, and the IPAM summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data to be held at IPAM/UCLA, July 11-29, 2005.
  • 2004-12-01: CBLL, in partnership with Net-Scale Technologies, has been selected as one of the 8 teams that will participate in the US government-funded LAGR project (Learning Applied to Ground Robots).
  • 2004-08-24: CBLL has an open postdoc position for a research project on vision-based robot navigation using machine learning methods.
  • 2004-08-23: The NORB dataset for object recognition is available.
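
The "block sparsity constraint" mentioned in the 2009-04-10 news item above can be sketched in a few lines. This is a simplified illustration with non-overlapping pools (the actual paper uses overlapping, topographically arranged pools); all names below are ours, not code from the lab:

```python
import numpy as np

def block_sparsity_penalty(z, pool_size, eps=1e-8):
    """Sum of L2 norms over non-overlapping pools of feature outputs.

    This acts like an L1 penalty on pool "energies": it drives entire
    pools of features to zero, so only a few pools remain active,
    rather than sparsifying individual units independently.
    """
    pools = z.reshape(-1, pool_size)                # group features into pools
    return np.sqrt((pools ** 2).sum(axis=1) + eps).sum()

def objective(W, x, z, lam=0.1, pool_size=4):
    # Reconstruction criterion plus block-sparsity penalty: a good code z
    # reconstructs the input x through the (learned) dictionary W while
    # activating as few pools as possible.
    recon_error = ((x - W @ z) ** 2).sum()
    return recon_error + lam * block_sparsity_penalty(z, pool_size)
```

Because the penalty sums pool norms, activity concentrated in one pool costs less than the same activity spread across several pools, which is what encourages filters in a pool to specialize in similar features.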

Calendar of Talks and Seminars at CBLL and around.

News and Events

Research at CBLL

Click here for a full list of past and present projects at CBLL.

The Computational and Biological Learning Lab at the Courant Institute was founded in September 2003. Our main research goal is to devise new methods, theories, and algorithms to derive knowledge from data: the process commonly known as learning. Understanding learning is a necessary step toward building intelligent systems that can learn from experience. It may also give us a better understanding of the mechanisms that underlie human and animal intelligence.

Over the last several years, Machine Learning methods have been applied with considerable success to such tasks as prediction, classification, detection, recognition, and diagnosis, and to practically every area of data modeling and decision making. Our interests at CBLL include:

  • "Deep" Learning, Unsupervised Learning: learning "deep" feature hierarchies, i.e., high-level internal representations of data, from unlabeled examples.
  • Large-scale learning: the design of large-scale learning systems involving tens of thousands of input variables, and millions of training samples;
  • Biological learning: Elucidating the neural mechanisms of learning in animals and humans. What is the learning algorithm of the cortex?
  • Energy-Based Models: non-probabilistic graphical models without intractable partition functions.
  • Learning, perception, vision: visual perception is one of the most challenging domains for learning. We are developing learning systems for such tasks as the detection and recognition of objects in natural images.
  • Computational models of biological vision: what better way to understand biological vision systems than to build an artificial one?
  • Mobile Robots: building trainable vision systems to make mobile robots navigate and avoid obstacles in rugged terrain.
  • Relational Graphical Models for Regression: we are building new types of relational graphical model for regression problems. The main application is predicting house prices.
  • End-to-end learning: using learning in every stage and component of a system, from raw input to ultimate output;
  • Applications: we work on all applications of machine learning and pattern recognition: data mining, biological data analysis, mobile robotics, face detection/recognition, object recognition in images, document understanding...
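
As a toy illustration of the energy-based view listed above (a sketch only, not code from the lab; the linear energy and the perceptron-style margin update are our own simplification): an energy function E(x, y) scores the compatibility of an input x and a label y, inference picks the lowest-energy label, and training pushes the energy of the correct label below that of incorrect ones, with no partition function ever computed.

```python
import numpy as np

def energy(W, x, y):
    # Linear energy: lower values mean (x, y) are more compatible.
    return -float(W[y] @ x)

def infer(W, x):
    # Inference: pick the label whose energy is lowest.
    return min(range(W.shape[0]), key=lambda y: energy(W, x, y))

def update(W, x, y, lr=0.1, margin=1.0):
    # Training pushes the correct label's energy below the most
    # offending incorrect label's energy by at least a margin.
    wrong = min((c for c in range(W.shape[0]) if c != y),
                key=lambda c: energy(W, x, c))
    if energy(W, x, y) + margin > energy(W, x, wrong):
        W[y] += lr * x      # lower E(x, y)
        W[wrong] -= lr * x  # raise E(x, wrong)

# Toy data: class 0 along the first axis, class 1 along the second.
data = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 1)]
W = np.zeros((2, 2))
for _ in range(10):
    for x, y in data:
        update(W, x, y)
```

The same recipe scales up by replacing the linear energy with any parameterized function (e.g., a convolutional network) and the argmin over two labels with a harder inference problem.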

Other topics:

We are also interested in the following topics, but have no active projects in these areas at the moment:

  • Digital Libraries: image compression, document understanding, electronic preservation and dissemination of Culture (see DjVu).
  • Signal processing, image processing, data compression (see DjVu).
  • Information theory, and the physical foundations of computing.

Click here for more information about research projects at CBLL with papers, pictures, demos, videos....

Courses (past and present)

Visit the index of courses, with course descriptions, slides, reading lists, homework...