By Xiaoyu Xiang, Ding Liu, Xiao Yang, Yiheng Zhu, Xiaohui Shen, Jan P. Allebach
This is the official PyTorch implementation of Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis.
- Our paper will be presented at WACV-2022 on Jan 5, 19:30 (GMT-10). Welcome to come and ask questions!
- 2021.12.26: Edit some comments in the code.
- 2021.12.25: Upload all code. Merry Christmas!
- 2021.12.21: Update the LICENSE and repo contents.
- 2021.4.15: Create the repo.
This repository contains the entire project (including all the utility scripts) for our open-domain sketch-to-photo synthesis network, AODA.
AODA aims to synthesize a realistic photo from a freehand sketch given its class label, even if sketches of that class are missing from the training data. It was accepted by WACV-2022 and CVPR Workshop-2021. The latest version of the paper, with supplementary materials, can be found at arXiv.
In AODA, we propose a simple yet effective open-domain sampling and optimization strategy that "fools" the generator into treating fake sketches as real ones. To achieve this goal, we adopt a framework that jointly learns sketch-to-photo and photo-to-sketch generation. Our approach synthesizes realistic color and texture while maintaining the geometric composition of open-domain sketches across various categories.
If our proposed architectures help your research, please consider citing our paper.
- Linux or macOS
- Python 3 (Anaconda is recommended)
- CPU or NVIDIA GPU + CUDA CuDNN
First, clone this repository:
git clone https://github.com/Mukosame/AODA.git
Install the required packages:
pip install -r requirements.txt
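If you prefer an isolated environment, a minimal setup could look like this (conda is just one option; the environment name and Python version below are placeholders, not requirements of this repo):
conda create -n aoda python=3.8 -y
conda activate aoda
pip install -r requirements.txt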
There are three datasets used in this paper: Scribble, SketchyCOCO, and QMUL-Sketch:
Scribble:
wget -N "http://www.robots.ox.ac.uk/~arnabg/scribble_dataset.zip"
SketchyCOCO:
Download from Google Drive.
QMUL-Sketch:
This dataset combines three subsets: handbags with 400 photos and sketches, ShoeV2 with 2000 photos and 6648 sketches, and ChairV2 with 400 photos and 1297 sketches. The complete dataset can be downloaded through Google Drive.
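Since AODA is built on the CycleGAN PyTorch codebase, each dataset under --dataroot is presumably expected in the standard unaligned A/B split; the layout below is a sketch of that assumption (which domain goes into A versus B depends on --direction, so check the data loaders in this repo before organizing your files):
dataset/scribble_10class_open/
    trainA/    # training images of one domain (e.g., sketches)
    trainB/    # training images of the other domain (e.g., photos)
    testA/     # test images of domain A
    testB/     # test images of domain B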
Train an AODA model:
python train.py --dataroot ./dataset/scribble_10class_open/ \
--name scribble_aoda \
--model aoda_gan \
--gan_mode vanilla \
--no_dropout \
--n_classes 10 \
--direction BtoA \
--load_size 260
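If training is interrupted, the CycleGAN-style options that AODA inherits normally allow resuming from the latest saved checkpoint. The --continue_train flag below is assumed from that codebase; please verify it against options/train_options.py before relying on it:
python train.py --dataroot ./dataset/scribble_10class_open/ \
--name scribble_aoda \
--model aoda_gan \
--gan_mode vanilla \
--no_dropout \
--n_classes 10 \
--direction BtoA \
--load_size 260 \
--continue_train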
After training, your models (models/latest_net_G_A.pth and models/latest_net_G_B.pth), their training state (states/latest.state), and the corresponding log file (train_scribble_aoda_xxx) are placed in the ./checkpoints/scribble_aoda/ directory.
Please download the weights from [GoogleDrive] and put them into the weights/ folder.
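A minimal way to stage the files (no specific checkpoint filenames are assumed here, since they depend on the Google Drive release):
mkdir -p weights
# move the downloaded .pth files into ./weights/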
You can switch --model_suffix to control the direction of synthesis (sketch-to-photo or photo-to-sketch). For different datasets, you need to change --name and the corresponding --n_classes:
python test.py --model_suffix _B --dataroot ./dataset/scribble/testA --name scribble_aoda --model test --phase test --no_dropout --n_classes 10
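For example, to run synthesis in the opposite direction, a sketch of the command is below; the _A suffix and the testB folder are assumptions based on the CycleGAN naming convention rather than verified settings:
python test.py --model_suffix _A --dataroot ./dataset/scribble/testB --name scribble_aoda --model test --phase test --no_dropout --n_classes 10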
Your test results will be saved at ./results/test_latest/.
You can also leave your questions as issues in the repository. I will be glad to answer them!
This project is released under the BSD 3-Clause License.
@inproceedings{xiang2022adversarial,
title={Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis},
author={Xiang, Xiaoyu and Liu, Ding and Yang, Xiao and Zhu, Yiheng and Shen, Xiaohui and Allebach, Jan P},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
year={2022}
}
This project is based on the PyTorch implementation of CycleGAN.