Culture-inspired Multi-modal Color Palette Generation and Colorization: A Chinese Youth Subculture Case
This repository is the official implementation of Culture-inspired Multi-modal Color Palette Generation and Colorization: A Chinese Youth Subculture Case, presented at The 3rd IEEE Workshop on Artificial Intelligence for Art Creation.
Links: paper | video
(The paper has not been published yet; the links will be added later.)
Subcultural youth groups in China have their own unique color style. For example, in traditional Chinese color concepts, the combination of red and green is considered unpleasant (there is a proverb: '红配绿，赛狗屁', roughly 'red paired with green looks terrible'). However, this same color combination represents a cool and rebellious style for Chinese Youth Subculture (CYS) groups, as illustrated by a number of posters found on popular CYS websites.
To study this unique color style and to build an intelligent palette generation and colorization tool for CYS groups, we started this project.
The CYS color dataset contains 1,263 images, each with a corresponding 5-color palette, a descriptive text, and a category. The figure above shows five examples from the dataset.
You can find 100 samples of our dataset here; the full version will be available later.
Our framework includes two separately trained networks: the color palette generation network and the colorization network. The first network is a conditional GAN (cGAN) with multi-modal input, trained to generate CYS color palettes. The second network is another cGAN, trained to color input images according to the palette generated by the first network.
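At inference time the two networks are chained: the palette network consumes the multi-modal input together with a noise vector, and its output palette conditions the colorization network. The following is a minimal sketch of that chaining; stylize, generate_palette, and colorize are hypothetical names used only for illustration, not the actual API of this repository.

import numpy as np

def stylize(text_embedding, grayscale_image, generate_palette, colorize, z_dim=100):
    """Chain the two cGANs: multi-modal input -> CYS palette -> colorized image."""
    z = np.random.normal(size=(1, z_dim))           # noise vector for the palette cGAN
    palette = generate_palette(text_embedding, z)   # e.g. shape (5, 3): five RGB colors
    return colorize(grayscale_image, palette)       # image conditioned on the generated palette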
We have developed a demo system to materialize our framework, where users can obtain an image colored in the CYS style in three steps: palette generation, color adjustment, and colorization.
You can find the video of our demo on YouTube.
We also used the demo system to generate some sample results:
Make sure you have installed Python >= 3.6.
Prepare a virtual environment and install the required packages:
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
As mentioned in Framework, there are two cGAN models in the framework. A different checkpoint needs to be downloaded for each model: the text2palette pre-trained model and the colorization pre-trained model.
Now change the configuration for the Streamlit demo in config.ini. Set the parameters under [streamlit] to the paths where you stored the downloaded checkpoints.
...
[streamlit]
t2p_ckpt_path = /PATH/TO/TEXT2PALETTE
col_ckpt_path = /PATH/TO/COLORIZATION
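For reference, these values can be read with Python's standard configparser; a minimal sketch, assuming only the [streamlit] section and the two keys shown above:

import configparser

# Load the checkpoint paths that the Streamlit demo will use.
config = configparser.ConfigParser()
config.read("config.ini")

t2p_ckpt_path = config["streamlit"]["t2p_ckpt_path"]   # text2palette checkpoint
col_ckpt_path = config["streamlit"]["col_ckpt_path"]   # colorization checkpoint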
Then run the demo by:
streamlit run st_demo.py
Now you can visit http://localhost:8501 to play with the pre-trained demo.
Prepare training data
We have implemented everything you need for the preprocessing step in data_preprocess.py.
- Change the paths to correct ones and run download_image_and_write_csv().
- If everything goes right, you will find images in ./data/images and a csv file named preprocessed_data.csv (with a different first column) in ./data.
- Then run augment_preprocess_data() and you will get a csv file named augment_data.csv (a minimal driver sketch follows this list).
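A minimal driver for these two steps might look as follows; the function names come from data_preprocess.py, while the argument-free calls and the exact output locations are assumptions based on the description above.

# Hypothetical preprocessing driver; adjust the paths inside data_preprocess.py first.
from data_preprocess import download_image_and_write_csv, augment_preprocess_data

download_image_and_write_csv()   # expected to fill ./data/images and write ./data/preprocessed_data.csv
augment_preprocess_data()        # expected to write ./data/augment_data.csv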
Change the configuration in config.ini as needed.
Notice: [text2palette] is for the color palette generation network and [colorization] is for the colorization network.
Detailed description of each field:
batch_size - batch size of the training dataset
learning_rate - learning rate of the optimizer
beta_1 - beta_1 parameter of the Adam optimizer
max_iteration_number - total number of training steps
print_every - print results every print_every steps
checkpoint_every - save a checkpoint every checkpoint_every steps
checkpoint_max_to_keep - maximum number of checkpoints kept on your machine; older ones will be deleted
checkpoint_dir - the folder where you want to save your checkpoints
sample_dir - the folder where you want to save the samples generated during training
z_dim - dimension of the GAN noise vector
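For illustration, a filled-in [text2palette] section might look like the following; all values are placeholders for this example, not the defaults shipped with the repository.

[text2palette]
batch_size = 32
learning_rate = 0.0002
beta_1 = 0.5
max_iteration_number = 100000
print_every = 100
checkpoint_every = 1000
checkpoint_max_to_keep = 5
checkpoint_dir = ./checkpoints/text2palette
sample_dir = ./samples/text2palette
z_dim = 100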
Start Training
Run text2palette_pipeline.py to train the color palette generation network:
python text2palette_pipeline.py --train
and colorization_pipeline.py to train the colorization network:
python colorization_pipeline.py --train