Authors: Yuan-kui Li, Yun-Hsuan Lien, Yu-Shuen Wang - National Yang Ming Chiao Tung University
Abstract: In this study, we present a colorization network that generates flat-color icons according to given sketches and semantic colorization styles. Specifically, our network contains a style-structure disentangled colorization module and a normalizing flow. The colorization module transforms a paired sketch image and style image into a flat-color icon. To enhance network generalization and icon quality, we present a pixel-wise decoder, a global style code, and a contour loss to reduce color gradients in flat regions and increase color discontinuity at boundaries. The normalizing flow maps Gaussian vectors to diverse style codes conditioned on the given semantic colorization label. This conditional sampling enables users to control attributes and obtain diverse colorization results. Compared to previous colorization methods built upon conditional generative adversarial networks, our approach enjoys the advantages of both high image quality and diversity. To evaluate its effectiveness, we compared the flat-color icons generated by our approach with those produced by recent colorization and image-to-image translation methods under various conditions. Experimental results verify that our method outperforms the state of the art both qualitatively and quantitatively.
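As a rough illustration of the pipeline described above, the Python sketch below shows how the two components could be combined at inference time. All class, method, and attribute names here (encode_style, decode, sample, style_dim) are hypothetical placeholders, not the repository's actual API.

import torch

class IconFlowSketch:
    """Conceptual wrapper around the two components described in the abstract."""

    def __init__(self, colorizer, flow):
        self.colorizer = colorizer  # style-structure disentangled colorization module
        self.flow = flow            # normalizing flow over style codes, conditioned on a label

    @torch.no_grad()
    def colorize_with_reference(self, sketch, style_image):
        # Reference-based mode: extract a global style code from the style image
        # and decode it together with the sketch's structure.
        style_code = self.colorizer.encode_style(style_image)
        return self.colorizer.decode(sketch, style_code)

    @torch.no_grad()
    def colorize_with_label(self, sketch, label, num_samples=4):
        # Label-conditioned mode: map Gaussian samples to diverse style codes
        # for the given semantic colorization label, then decode each of them.
        z = torch.randn(num_samples, self.flow.style_dim)
        style_codes = self.flow.sample(z, condition=label)
        return [self.colorizer.decode(sketch, code) for code in style_codes]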
Clone the repository:
git clone https://github.com/djosix/IconFlow.git
cd IconFlow
Install requirements:
numpy==1.21.1
Pillow==8.3.1
torch==1.9.0
torchvision==0.10.0
torchdiffeq==0.2.2
scikit-image==0.18.2
opencv-python==4.5.3.56
tensorboard==2.5.0
fire==0.4.0
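These pinned versions can be installed with pip, for example:

pip3 install numpy==1.21.1 Pillow==8.3.1 torch==1.9.0 torchvision==0.10.0 \
    torchdiffeq==0.2.2 scikit-image==0.18.2 opencv-python==4.5.3.56 \
    tensorboard==2.5.0 fire==0.4.0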
Download dataset.zip and unzip it into the dataset directory:
unzip -d dataset dataset.zip
Preprocess:
python3 -m iconflow.utils.dataset preprocess_images --dataset-dir dataset --resolutions '128,512' --num-workers 16
The directory layout should now look like this:
IconFlow/
├── iconflow/                   source code
├── dataset/
│   ├── raw/                    512x512 *.png raw icon images (dataset.zip)
│   ├── ColorImageScale.pkl     color image scale information (dataset.zip)
│   └── data/                   (generated)
│       ├── 128/
│       │   ├── contour/        128x128 *.png icon contours
│       │   └── img/            128x128 *.png icons
│       └── 512/
│           ├── contour/        512x512 *.png icon contours
│           └── img/            512x512 *.png icons
└── ...
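The preprocessing step leaves a contour/ and an img/ folder per resolution. Assuming the two folders pair files by name (an assumption for illustration; this is not the loader used by iconflow itself), a minimal PyTorch dataset over them could look like this:

import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class ContourIconPairs(Dataset):
    """Minimal sketch: pairs data/<res>/contour/*.png with data/<res>/img/*.png by filename."""

    def __init__(self, root='dataset/data/128'):
        self.contour_dir = os.path.join(root, 'contour')
        self.img_dir = os.path.join(root, 'img')
        self.names = sorted(os.listdir(self.contour_dir))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        name = self.names[index]
        contour = Image.open(os.path.join(self.contour_dir, name)).convert('L')
        icon = Image.open(os.path.join(self.img_dir, name)).convert('RGB')
        return self.to_tensor(contour), self.to_tensor(icon)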
- Train the reference-based colorizer:
python3 -m iconflow train_net --device cuda
- Train the conditional continuous normalizing flow (c-CNF):
python3 -m iconflow train_flow --device cuda
- Train the upsampler (128x128 to 512x512):
python3 -m iconflow train_up --device cuda --image-size 512
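Since the flow models the style codes produced by the colorizer, training the colorizer before the flow (and the upsampler last) is presumably the intended order. If the --device flag is passed straight to torch.device (an assumption, not verified), a specific GPU can be selected like this:

python3 -m iconflow train_net --device cuda:1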
The default output directory is output. After training, the project tree looks like this:
IconFlow/
├── dataset/
├── iconflow/
├── output/
│   ├── checkpoint.pt
│   ├── events.out.tfevents.*
│   ├── flow/
│   │   ├── checkpoint.pt
│   │   └── events.out.tfevents.*
│   └── up_512/
│       ├── checkpoint.pt
│       └── events.out.tfevents.*
└── ...
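Assuming the checkpoint.pt files are ordinary torch.save archives (an assumption; the exact contents depend on the training code), they can be inspected like this:

import torch

# Load on CPU so no GPU is needed just for inspection.
state = torch.load('output/checkpoint.pt', map_location='cpu')
print(type(state))
if isinstance(state, dict):
    print(list(state.keys()))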
Use TensorBoard to monitor training progress:
tensorboard --logdir ./output
Additional requirements for the demo app:
wxPython==4.1.1
scikit-learn==0.24.2
After training, you can run the demo app:
python3 -m app
Weights are loaded from the checkpoint files in the output directory. Pretrained checkpoints are also available in output.zip:
unzip -d output output.zip
To cite this work:
@inproceedings{li2022style,
title={Style-Structure Disentangled Features and Normalizing Flows for Diverse Icon Colorization},
author={Li, Yuan-kui and Lien, Yun-Hsuan and Wang, Yu-Shuen},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={11244--11253},
year={2022}
}