This work was accepted at CVPR 2021 as a poster. It proposes a new video inpainting approach that combines temporal convolution with optical flow.
Note: this code is currently a beta version and is not guaranteed to be fully correct.
Optical Flow Davis | Optical Flow FVI | Mask Davis | Mask FVI | Checkpoint
torch==1.7.0
torchvision==0.8.1
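If these packages are not already installed, a minimal setup might look like the following (a sketch assuming pip and a CUDA-capable machine; pick the PyTorch wheel that matches your CUDA version):

```bash
# Install the pinned PyTorch stack listed above
pip install torch==1.7.0 torchvision==0.8.1
```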
For FVI dataset, please refer to https://github.com/amjltc295/Free-Form-Video-Inpainting. For DAVIS dataset, please refer to https://davischallenge.org/.
TSAM
├── data
│   ├── checkpoints
│   ├── model_weights
│   ├── results
│   ├── FVI
│   ├── DAVIS
│   └── runs
└── code
    └── master
        └── TSAM
            └── ...
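One way to create the data folders above is a few mkdir commands run from the repository root, i.e. the top-level TSAM directory (a sketch; the FVI and DAVIS datasets from the links above still need to be downloaded and extracted into the corresponding folders):

```bash
# Create the data layout expected by the training/testing scripts
mkdir -p data/checkpoints data/model_weights data/results data/runs
# Dataset folders; fill these with the downloaded FVI and DAVIS data
mkdir -p data/FVI data/DAVIS
```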
Pretrained weights: download all the pretrained weights and put them under TSAM/data/model_weights.
Model Name | Download |
---|---|
TSM_imagenet_resent50_gated.pth | weight |
TSM_imagenet_resent50.pth | weight |
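After downloading, placing the weights could look like this (a sketch assuming the files were saved to the repository root; filenames are taken from the table above):

```bash
# Move the pretrained backbone weights into the expected location
mv TSM_imagenet_resent50_gated.pth TSM_imagenet_resent50.pth data/model_weights/
```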
FVI TSM with moving object/curve masks (pretrain, then finetune):
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py --config config/config_pretrain.json --dataset_config dataset_configs/FVI_all_masks.json
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py --config config/config_finetune.json --dataset_config dataset_configs/FVI_all_masks.json
To test, change train.py in the training scripts above to test.py and append -p /pth/to/ckpt.
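For example, applying this to the FVI fine-tuning command above gives the following (a sketch; /pth/to/ckpt is a placeholder for your checkpoint, and single-GPU testing mirrors the DAVIS example below):

```bash
CUDA_VISIBLE_DEVICES=0 python3 test.py --config config/config_finetune.json --dataset_config dataset_configs/FVI_all_masks.json -p /pth/to/ckpt
```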
DAVIS TSAM object removal:
CUDA_VISIBLE_DEVICES=0 python3 test.py --config config/config_finetune_davis.json --dataset_config dataset_configs/DAVIS_removal.json -p /pth/to/ckpt
@inproceedings{zou2020progressive,
title={Progressive Temporal Feature Alignment Network for Video Inpainting},
author={Xueyan Zou and Linjie Yang and Ding Liu and Yong Jae Lee},
booktitle={CVPR},
year={2021}
}
Part of the code is borrowed from https://github.com/amjltc295/Free-Form-Video-Inpainting and https://github.com/researchmm/STTN. Thanks for their great work!