This is the prediction-only interface of 

    Data-Driven 3D Primitives for Single Image Understanding
    David F. Fouhey, Abhinav Gupta, and Martial Hebert
    At International Conference on Computer Vision, 2013

This is version 1.02 of the code

Please note that we cannot provide support and this code comes with
no guarantees.

You still need to download the supporting data (models and normals cache) in 
order to use the method. Please be careful which model you use on which
dataset: one model is for one fold of a 4-fold cross-validation split and 
the other is for the standard train/test split.

***************
* Basic setup *
***************

1) Download the code and data and unpack each to their own directory, $code 
   and $data respectively.

2) Update the code to match the location of the data. Edit getResourcePath.m
   and set dataPath to ${data}/.

3) You may have to recompile the HOG feature computation code, especially if
   you are not using an x86_64 Linux machine.

   Run compile.m inside MATLAB in the main directory. If something fails, go to
   ${code}/detectorCode/voc-release4/ and modify compile.m there.

4) Look at demo.m

At this point, the system should work. demo.m demonstrates how to run the
code and gives some reasonable options for the inference procedure.
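The edit in step 2 amounts to pointing the code at your unpacked data
directory. A minimal sketch, assuming getResourcePath.m simply returns the
path (the real file may have a different signature and define additional
resource locations):

```matlab
% Sketch only: the actual getResourcePath.m shipped with the code
% may differ; the point is that dataPath must name ${data}/.
function dataPath = getResourcePath()
    % Placeholder path: replace with the directory where the
    % supporting data (models and normals cache) was unpacked.
    dataPath = '/home/user/3dp-data/';
end
```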

The main code is encapsulated in run3DP.m, which takes an options structure to 
decide what subroutines to run. run3DP.m explains the options structure.
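As an illustration of the intended call pattern (the field names below are
hypothetical placeholders; run3DP.m documents the real options structure and
interface):

```matlab
% Hypothetical usage sketch: these option fields are illustrative,
% not the actual names documented in run3DP.m.
opts = struct();
opts.doDetection = true;   % run the 3D primitive detectors
opts.doTransfer  = true;   % transfer normals from detections
opts.doRectified = true;   % also produce the rectified output
nd = run3DP('kitchen.jpg', opts);
% nd.denseMaps / nd.rectifiedDenseMaps as in the Results section
```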

***********
* Results *
***********

If you run nyuDemo with the canonical model specified in getResourcePath, it
will produce the results reported on the standard split of the NYU v2 dataset.
The last digit of a few numbers (tenths of a degree/percent) rounds slightly
differently from the webpage due to a combination of small code differences
and repacking the data to single precision for distribution.

This version of the code has been designed to produce stable results across 
systems and MATLAB versions by specifying certain behavior (e.g., edge 
detection in VP estimation, image resizing for transfer). Please let us know 
if you experience any major differences from the numbers reported below
so we can fix any remaining issues.

For reference, running on Ubuntu 12.04 with MATLAB R2013a and the canonical
model from the website, we get:

Mean        Median      RMSE        PGP 11.25   PGP 22.5    PGP 30
(RECTIFIED OUTPUT -- nd.rectifiedDenseMaps)
36.0617     20.4942     49.4289     35.9320     51.9601     57.7538
(UNRECTIFIED OUTPUT -- nd.denseMaps)
34.2388     30.0443     41.4003     18.5379     38.5759     49.9361

***************
* Suggestions *
***************

1) Speed: this implementation is not optimized; while the convolution used for
detection can be multithreaded, the subsequent reasoning is not. Therefore,
for best performance in batch, do not run a single instance, but instead run
the method in parallel, using a single core per instance. This is illustrated
in demo.m, which coordinates the work via a shared file system.
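A common way to coordinate such a batch over a shared file system (a sketch
of the general pattern only, not the exact logic in demo.m) is to have each
single-core MATLAB instance claim images via lock files:

```matlab
% Sketch: launch several MATLAB instances running this loop; each
% claims an image by creating a lock file before processing it.
% Directory names here are placeholders.
files = dir('images/*.jpg');
for i = 1:numel(files)
    lockName = fullfile('locks', [files(i).name '.lock']);
    if exist(lockName, 'file'), continue; end  % already claimed
    fclose(fopen(lockName, 'w'));              % claim this image
    % ... run the method on this image here ...
end
```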

2) Confidence: you can use the returned confidence map for the dense
predictions as a measure of prediction reliability. In our experiments, the
mean of the confidence map is well-correlated with the prediction error for
all metrics we have used.
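For example, a batch of predictions could be ranked by expected quality using
the mean confidence (the field name nd.confidenceMap below is assumed for
illustration; check the actual output structure of run3DP.m):

```matlab
% Hypothetical post-hoc ranking: images whose confidence map has a
% higher mean are expected to have lower prediction error.
% 'results' is a cell array of per-image outputs from run3DP.
meanConf = cellfun(@(nd) mean(nd.confidenceMap(:)), results);
[~, order] = sort(meanConf, 'descend');  % most reliable first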

**********
* Bibtex *
**********

If you use this code in a publication, please cite:

@inproceedings{Fouhey13a,
    title = {Data-Driven {3D} Primitives for Single Image Understanding},
    author = {Fouhey, David F. and Gupta, Abhinav and Hebert, Martial},
    booktitle = {ICCV},
    year = {2013},
}

