This is the implementation for the paper:
Few-shot Font Style Transfer between Different Languages
in Proc. of the Winter Conference on Applications of Computer Vision (WACV'21), Jan. 2021. Paper.
Requirements:
- Linux
- CPU or NVIDIA GPU + CUDA CuDNN
- Python 3
- torch>=0.4.1
- torchvision>=0.2.1
- dominate>=2.3.1
- visdom>=0.1.8.3
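Assuming a pip-based environment, the dependencies above can be installed in one step (pinned to the minimum versions listed, or newer):

```bash
# Install the Python dependencies listed above; use a CUDA-enabled torch build if training on GPU
pip install "torch>=0.4.1" "torchvision>=0.2.1" "dominate>=2.3.1" "visdom>=0.1.8.3"
```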
- Download the dataset.
- Unzip it to ./datasets/
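A minimal sketch of this step, assuming the download is a single zip archive (`font_dataset.zip` below is a placeholder for the actual file name):

```bash
# Placeholder archive name -- replace with the file you actually downloaded
mkdir -p ./datasets
unzip font_dataset.zip -d ./datasets/
```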
- To view training results and loss plots, run
python -m visdom.server
and click the URL http://localhost:8097.
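One convenient option (generic shell usage, not something this repository requires) is to launch the visdom server in the background before starting training:

```bash
# Start the visdom server in the background (it listens on port 8097 by default)
python -m visdom.server &
```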
- Train the model
bash ./train.sh
- Test
bash ./test.sh
- Evaluate
bash ./evaluate.sh
- scripts.sh integrates train.sh, test.sh, and evaluate.sh
bash ./scripts.sh
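Since scripts.sh integrates the three stages, running it should be roughly equivalent to the following (a sketch of the chained invocation, not a copy of the script's contents):

```bash
# Run training, testing, and evaluation in sequence, stopping if any stage fails
bash ./train.sh && bash ./test.sh && bash ./evaluate.sh
```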
Code derived and reshaped from: