Code release for Large Scale Visual Food Recognition
### Introduction

Our Progressive Region Enhancement Network (PRENet) mainly consists of progressive local feature learning and region feature enhancement. The former adopts a progressive training strategy to learn complementary, multi-scale, fine-grained local features, such as different ingredient-relevant information. The latter uses self-attention to incorporate richer multi-scale contexts into the local features and thereby enhance the local feature representation. We then fuse the enhanced local features with the global features from global feature learning into a unified representation via a concatenation layer.
During training, after progressively training the networks of the different stages, we train the whole network including the concatenation part, and further introduce a KL-divergence term to increase the difference between stages so that more detailed features are captured. For inference, considering the complementary outputs of each stage and of the concatenated features, we combine their prediction results for the final food classification.
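To make these two ideas concrete, here is a minimal PyTorch sketch. The interface (a model returning per-stage logits plus logits from the concatenated features), the pairwise KL term, and summation of softmax scores as the fusion rule are assumptions for illustration, not the exact implementation used in this repository.

```python
import torch
import torch.nn.functional as F

def stage_kl_divergence(stage_logits):
    """Pairwise KL between stage predictions (illustrative sketch).

    Assumption: `stage_logits` is a list of per-stage classifier outputs.
    To *increase* the difference between stages, this term would be
    maximized, e.g. subtracted from the total training loss.
    """
    loss = 0.0
    for i, p in enumerate(stage_logits):
        for j, q in enumerate(stage_logits):
            if i != j:
                loss = loss + F.kl_div(
                    F.log_softmax(p, dim=1),
                    F.softmax(q, dim=1),
                    reduction="batchmean",
                )
    return loss

@torch.no_grad()
def predict(model, images):
    """Combine complementary predictions at inference (illustrative sketch).

    Assumption: the model returns logits from every progressive stage
    plus logits from the classifier on the concatenated features.
    """
    stage_logits, concat_logits = model(images)
    probs = F.softmax(concat_logits, dim=1)
    for logits in stage_logits:
        probs = probs + F.softmax(logits, dim=1)
    return probs.argmax(dim=1)
```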
### Requirement
- Python 3.6
- PyTorch >= 1.3.1
- torchvision >= 0.4.2
- PIL
- Numpy
- dropblock
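A possible way to install the Python dependencies with pip (the exact torch/torchvision builds depend on your CUDA setup, so adjust as needed):

```bash
# Illustrative install command; pick the torch build matching your CUDA version.
pip install "torch>=1.3.1" "torchvision>=0.4.2" pillow numpy dropblock
```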
### Data preparation
1. Download the food datasets. The file structure should look like:
```
dataset
├── class_001
│   ├── 1.jpg
│   ├── 2.jpg
│   └── ...
├── class_002
│   ├── 1.jpg
│   ├── 2.jpg
│   └── ...
└── ...
```
2. Download the training and testing list files, e.g. `train_full.txt` and `test_full.txt`.
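For reference, here is a minimal sketch of how such a list file could be consumed, assuming each line contains an image path (relative to the dataset root) and an integer class label separated by whitespace; the actual list format used by this repository may differ.

```python
from PIL import Image
from torch.utils.data import Dataset

class FoodListDataset(Dataset):
    """Toy dataset reading (image_path, label) pairs from a list file.

    Assumption: every line of the list file looks like
    `class_001/1.jpg 0`, i.e. a relative image path and an integer label.
    """

    def __init__(self, root, list_file, transform=None):
        self.root = root
        self.transform = transform
        with open(list_file) as f:
            self.samples = [line.split() for line in f if line.strip()]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(f"{self.root}/{path}").convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, int(label)
```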
### Training
1. Download the pretrained model on Food2k from [google](https://drive.google.com/file/d/1gA_abY0d_0B6jXpeXNgCKBbSzc8iEHxU/view?usp=sharing) or [baidu](https://pan.baidu.com/s/1HMvBf0F-FpMIMPtuQtUE8Q) (Code: o0nj).
2. To train a `PRENet` on food datasets from scratch, run ``train.py``:
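A possible invocation is sketched below; the flag names (`--dataset`, `--image_path`, `--train_path`, `--test_path`) are illustrative assumptions, so check the argument parser in ``train.py`` for the arguments it actually accepts.

```bash
# Illustrative only: flag names are hypothetical; see `python train.py --help`.
python train.py --dataset food2k \
    --image_path /path/to/dataset \
    --train_path /path/to/train_full.txt \
    --test_path /path/to/test_full.txt
```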
3. To evaluate a pre-trained `PRENet` on food datasets, run:
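Again, the command below is only a sketch; the evaluation entry point and its flags (here a hypothetical `--test` mode of ``train.py`` with a `--weights` checkpoint argument) are assumptions to be checked against the actual scripts in this repository.

```bash
# Illustrative only: entry point and flag names are hypothetical.
python train.py --test \
    --dataset food2k \
    --image_path /path/to/dataset \
    --test_path /path/to/test_full.txt \
    --weights /path/to/food2k_pretrained.pth
```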