Authors: Matthew D. Zeiler and Rob Fergus

Title: Learning Image Decompositions with Hierarchical Sparse Coding

Abstract:
We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. 
When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, 
high-level object parts, and complete objects. To build our model, we rely on a novel inference scheme that ensures each layer reconstructs 
the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. 
This scheme makes it possible to robustly learn multiple layers of representation, and we show a 4-layer model trained on images 
from the Caltech-101 dataset. We use our model to produce image decompositions that, when used as input to standard classification schemes, 
give a significant performance gain over low-level edge features and yield overall performance 
competitive with leading approaches.
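
As a rough sketch of the layer-wise convolutional sparse coding and reconstruct-the-input inference described above (the notation below is ours and is not drawn from the paper body), layer l can be viewed as seeking sparse feature maps z_{k,l} whose reconstruction \hat{y}_l of the original image y minimizes a cost of the form

\[
C_l(y) \;=\; \frac{\lambda_l}{2}\,\bigl\|\hat{y}_l - y\bigr\|_2^2 \;+\; \sum_{k=1}^{K_l} \bigl|z_{k,l}\bigr|_1 ,
\]

where \hat{y}_l is obtained by passing the layer-l feature maps back through the stacked un-pooling and convolution operations down to image space. Under this view, every layer is scored against the input image itself rather than against the output of the layer directly beneath it, consistent with the inference scheme the abstract describes.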