# Deep filter banks for texture recognition, description, and segmentation

The code provided runs the evaluation of RCNN and FV-CNN on various texture and material datasets (DTD, FMD, KTH-TIPS2b, ALOT), as well as on datasets from other domains: objects (VOC07), scenes (MIT Indoor), and fine-grained categories (CUB).
The results of these experiments are reported in Tables 1 and 2 of **Deep Filter Banks for Texture Recognition and Segmentation**, M. Cimpoi et al., CVPR 2015, and in Tables 3, 4, 5, and 6 of **Deep Filter Banks for Texture Recognition, Description, and Segmentation**, M. Cimpoi et al., http://arxiv.org/abs/1507.02620.

## Getting started

Once you have downloaded the code, make sure the dependencies are installed (see below).
Download the datasets you want to evaluate on, and copy or link them under the `data` folder in your repository. Download the models (VGG-M, VGG-VD, and AlexNet) into `data/models`. It is slightly faster to download them manually from http://www.vlfeat.org/matconvnet/pretrained/.
Once done, simply run the `run_experiments.m` file.
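
If you prefer to script the model download, the sketch below is a minimal, hypothetical helper (not part of the original code): the file names and base URL are assumptions based on the MatConvNet pretrained-model page, so check that page for the exact files you need.

```matlab
% Hypothetical sketch: fetch the pre-trained models into data/models.
% File names and the base URL are assumptions taken from
% http://www.vlfeat.org/matconvnet/pretrained/ - verify them there.
modelDir = fullfile('data', 'models');
if ~exist(modelDir, 'dir'), mkdir(modelDir); end
models  = {'imagenet-vgg-m.mat', 'imagenet-vgg-verydeep-19.mat', 'imagenet-caffe-alex.mat'};
baseUrl = 'http://www.vlfeat.org/matconvnet/models/';
for ii = 1:numel(models)
  dest = fullfile(modelDir, models{ii});
  if ~exist(dest, 'file')
    urlwrite([baseUrl models{ii}], dest);  % use websave(dest, url) on newer MATLAB releases
  end
end
```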

In `texture_experiments.m` you can remove (or add) dataset names in the `datasetList` cell array. Make sure you adjust the number of splits accordingly. Datasets are specified as `{'dataset_name', <num_splits>}` cells, as in the sketch below.
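
A hypothetical `datasetList` might look like this (the dataset names and split counts are illustrative only; use the ones matching the data you actually downloaded):

```matlab
% Hypothetical example of the datasetList cell array: each entry is a
% {'dataset_name', <num_splits>} cell. Names and counts are illustrative.
datasetList = {{'dtd', 10}, {'fmd', 10}, {'kthtips2b', 4}};

for ii = 1:numel(datasetList)
  name      = datasetList{ii}{1};  % e.g. 'dtd'
  numSplits = datasetList{ii}{2};  % number of train/test splits to evaluate
  fprintf('%s: %d splits\n', name, numSplits);
end
```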

### Dependencies

The code relies on [VLFeat](http://www.vlfeat.org/) and [MatConvNet](http://www.vlfeat.org/matconvnet/), which should be downloaded and built before running the experiments.
Run `git submodule update --init` in the repository folder to fetch them.

To build VLFeat, go to `<DEEP-FBANKS_DIR>/vlfeat` and run `make`; make sure the MATLAB executable and `mex` are on your path.
To build MatConvNet, start MATLAB, go to `<DEEP-FBANKS_DIR>/matconvnet/matlab`, and run `vl_compilenn`; for GPU support, make sure CUDA is installed and `nvcc` is on your path.
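
For reference, a minimal GPU-enabled MatConvNet build from the MATLAB prompt might look like the sketch below (`<DEEP-FBANKS_DIR>` is a placeholder for your checkout path; see the MatConvNet installation notes for the full set of compile options):

```matlab
% Minimal sketch of compiling MatConvNet from within MATLAB.
% Drop the 'enableGpu' option for a CPU-only build.
cd('<DEEP-FBANKS_DIR>/matconvnet/matlab');
vl_compilenn('enableGpu', true);
```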

For LLC features (Table 3 in the arXiv paper), please download the code from [http://www.robots.ox.ac.uk/~vgg/software/enceval_toolkit](http://www.robots.ox.ac.uk/~vgg/software/enceval_toolkit) and copy the following files to the code folder (not into subfolders):

* `enceval/enceval-toolkit/+featpipem/+lib/LLCEncode.m`
* `enceval/enceval-toolkit/+featpipem/+lib/LLCEncodeHelper.cpp`
* `enceval/enceval-toolkit/+featpipem/+lib/annkmeans.m`

Create the corresponding `dcnnllc` encoder type (see the examples provided in `run_experiments.m` for BOVW, VLAD, and FV); a hypothetical sketch follows below.
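
Purely as a hypothetical illustration (the variable name below is invented for this sketch and is not the actual identifier used in `run_experiments.m`), the LLC variant would be registered next to the existing BOVW/VLAD/FV encoder types:

```matlab
% Hypothetical sketch only: copy one of the existing BOVW/VLAD/FV encoder
% definitions in run_experiments.m and change its type string to 'dcnnllc'.
% 'encoderTypes' is an invented name used here for illustration.
encoderTypes = {'dcnnbovw', 'dcnnvlad', 'dcnnfv', 'dcnnllc'};  % add the LLC variant
```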

### Paths and datasets

The `<DATASET_NAME>_get_database.m` files generate the imdb file for each dataset. Make sure the datasets are manually copied or linked into the `data` folder.

The datasets are stored in individual folders under `data`, in the current code folder, and experiment results are stored in the `data/exp01` folder, in the same location as the code. Alternatively, you could make `data` and the experiment folders symbolic links pointing to more convenient locations.
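
For instance, a minimal way to set this up on a Unix-like system (the target path below is a placeholder) is to create the link before the first run:

```matlab
% Minimal sketch, assuming a Unix-like system: point data/ at a larger disk
% so datasets, cached descriptors, and results are stored there.
% '/path/to/large/disk/deep-fbanks-data' is a placeholder path.
if ~exist('data', 'dir')
  system('ln -s /path/to/large/disk/deep-fbanks-data data');
end
```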

Please be aware that the descriptors are stored on disk (in the cache folder under `data/exp01/<experiment-dir>`) and may require large amounts of free space (especially the FV-CNN features).

### Dataset and evaluation

The Describable Textures Dataset (DTD) is publicly available for download at
[http://www.robots.ox.ac.uk/~vgg/data/dtd](http://www.robots.ox.ac.uk/~vgg/data/dtd), together with the precomputed DeCAF features for DTD, the paper, and evaluation results.

Our additional annotations for the OpenSurfaces dataset are publicly available for download at:
http://www.robots.ox.ac.uk/~vgg/data/wildtex/

Code is available at:
TODO: GITHUB LINK

Code for the CVPR14 paper (and Table 2 in the arXiv paper):
http://www.robots.ox.ac.uk/~vgg/data/dtd/download/desctex.tar.gz

### Citation

If you use the code or the data, please cite the following in your work:

FV-CNN and OpenSurfaces Additional Annotations:
@Article{Cimpoi15a,
  Author  = "Cimpoi, M. and Maji, S. and Kokkinos, I. and Vedaldi, A.",
  Title   = "Deep Filter Banks for Texture Recognition, Description, and Segmentation",
  Journal = "arXiv preprint arXiv:1507.02620",
  Year    = "2015",
}

@InProceedings{Cimpoi15,
  Author    = "Cimpoi, M. and Maji, S. and Vedaldi, A.",
  Title     = "Deep Filter Banks for Texture Recognition and Segmentation",
  Booktitle = "IEEE Conference on Computer Vision and Pattern Recognition",
  Year      = "2015",
}

DTD Dataset and IFV + DeCAF:
@InProceedings{cimpoi14describing,
  Author    = "M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and A. Vedaldi",
  Title     = "Describing Textures in the Wild",
  Booktitle = "Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})",
  Year      = "2014",
}