
Rethinking-Inpainting-MEDFE

[MEDFE overview figure]

Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations.

Hongyu Liu, Bin Jiang, Yibing Song, Wei Huang and Chao Yang.
In ECCV 2020 (Oral).

All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International)

The code is released for academic research use only. For commercial use, please contact kumapower@hnu.edu.cn.

Installation

Clone this repo.

git clone https://github.com/KumapowerLIU/Rethinking-Inpainting-MEDFE.git

Prerequisites

  • Python 3
  • PyTorch >= 1.0
  • Tensorboard
  • Torchvision
  • Pillow
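If these are not already installed, one way to set up the Python dependencies is with pip (this is only a suggestion; install PyTorch following the official instructions for your CUDA version):

pip install torch torchvision tensorboard pillow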

Dataset Preparation

We use the Places2, CelebA, and Paris Street-View datasets. To train a model on the full datasets, download them from the official websites.

Our model is trained on the irregular mask dataset provided by Liu et al. You can download the publicly available Irregular Mask Dataset from their website.

To generate the structure images of the datasets, we follow StructureFlow and utilize the RTV smoothing method. Run the generation function data/Matlab/generate_structre_images.m in MATLAB. For example, to generate smooth images for Places2, you can run the following code:

generate_structure_images("path to Places2 dataset root", "path to output folder");
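Before training, it can help to check that the structure images, ground-truth images, and masks are where you expect them. The snippet below is a hypothetical sanity check, not part of this repository; the directory paths are placeholders, and it assumes the structure images keep the same filenames as their ground-truth counterparts:

import os

# Hypothetical paths; replace with the folders you pass to train.py.
st_root = "./datasets/places2_structure"   # RTV-smoothed structure images
de_root = "./datasets/places2_gt"          # ground-truth images
mask_root = "./datasets/irregular_mask"    # irregular masks from Liu et al.

def image_names(root):
    exts = (".png", ".jpg", ".jpeg")
    return sorted(f for f in os.listdir(root) if f.lower().endswith(exts))

st, de, masks = image_names(st_root), image_names(de_root), image_names(mask_root)
print(f"{len(st)} structure images, {len(de)} ground-truth images, {len(masks)} masks")

# Every ground-truth image should have a matching structure image.
missing = set(de) - set(st)
if missing:
    print(f"warning: {len(missing)} ground-truth images have no structure image")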

Training New Models

# To train on your dataset, for example:
python train.py --st_root=[the path of structure images] --de_root=[the path of ground truth images] --mask_root=[the path of mask images]

There are many options you can specify. Please run python train.py --help or see the options/ folder.

For the current version, the batch size needs to be set to 1.

Training is logged for TensorBoard; the logs are stored at logs/[name].
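As a concrete, hypothetical example, training on Places2 with the folder layout sketched in the Dataset Preparation section would look like the following (the paths are placeholders):

python train.py --st_root=./datasets/places2_structure --de_root=./datasets/places2_gt --mask_root=./datasets/irregular_mask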

Code Structure

  • train.py: the entry point for training.
  • models/networks.py: defines the architecture of all models.
  • options/: creates option lists using the argparse package. More individual options are dynamically added in other files as well.
  • data/: processes the dataset before passing it to the network.
  • models/encoder.py: defines the encoder.
  • models/decoder.py: defines the decoder.
  • models/PCconv.py: defines the multi-scale partial convolution, the feature equalization, and the two branches.
  • models/MEDFE.py: defines the loss, model, optimization, forward and backward passes, and other components.

Pre-trained weights and test model

There are three folders of pre-trained weights, one for each dataset. For CelebA, we only use center masks. For Places2, the pre-trained model is only suited to natural images. You can download the pre-trained models here. The demo will be coming soon; I will re-train our model and update the parameters soon.

About Feature equalizations

I think feature equalization could be utilized in many tasks as a replacement for the traditional attention block (Non-local/CBAM). I did not try this due to lack of time; I hope someone will try the method and communicate with me.
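As a rough illustration of what such a drop-in replacement could look like, here is a minimal channel re-weighting block in PyTorch. It is only a sketch of the general idea (squeeze-and-excitation style channel equalization sitting where an attention block would normally go), not the feature equalization module implemented in models/PCconv.py:

import torch
import torch.nn as nn

class SimpleChannelEqualization(nn.Module):
    """Illustrative SE-style channel re-weighting block.

    A sketch of equalizing feature channels where an attention block
    (Non-local/CBAM) would normally sit; NOT the module from models/PCconv.py."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (N, C, H, W) feature map
        n, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # global average pooling -> (N, C)
        w = self.fc(w).view(n, c, 1, 1)  # per-channel weights in (0, 1)
        return x * w                     # re-weight (equalize) the channels

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    print(SimpleChannelEqualization(64)(feat).shape)  # torch.Size([1, 64, 32, 32])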

Citation

If you use this code for your research, please cite our paper.

@inproceedings{Liu2019MEDFE,
  title={Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations},
  author={Hongyu Liu and Bin Jiang and Yibing Song and Wei Huang and Chao Yang},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2020}
}
