
Unsupervised Discovery of Interpretable Directions in the GAN Latent Space

Authors' official implementation of Unsupervised Discovery of Interpretable Directions in the GAN Latent Space (ICML 2020).

This code explores interpretable latent space directions of a pretrained GAN.

Scheme of our approach: the latent deformator A aims to produce shifts that are easy for the reconstructor R to distinguish.

Here are several examples for Spectral Norm GAN (MNIST & Anime Faces), ProgGAN (CelebA-HQ), and BigGAN (ILSVRC):
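To make the scheme above concrete, here is a minimal, self-contained sketch of one training step in the spirit of the method (the module interfaces, hyperparameters, and loss weighting are illustrative assumptions, not the repository's exact trainer): the deformator A maps a scaled one-hot direction code to a latent shift, and the reconstructor R is trained to recover which direction was applied and with what magnitude.

import torch
import torch.nn.functional as F

def training_step(G, deformator, reconstructor, optimizer,
                  batch_size=32, latent_dim=128, num_directions=128, shift_scale=6.0):
    # Illustrative sketch only; names and constants are assumptions.
    z = torch.randn(batch_size, latent_dim)

    # sample a direction index k and a signed magnitude eps for each sample
    k = torch.randint(num_directions, (batch_size,))
    eps = (torch.rand(batch_size) * 2 - 1) * shift_scale

    # encode (k, eps) as a scaled one-hot vector and map it to a latent shift
    one_hot = F.one_hot(k, num_directions).float() * eps.unsqueeze(1)
    shift = deformator(one_hot)

    # generate the original / shifted image pair and ask R to tell them apart
    imgs = G(z)
    imgs_shifted = G(z + shift)
    logits, eps_pred = reconstructor(imgs, imgs_shifted)

    # classification loss on the direction index plus regression loss on the magnitude
    loss = F.cross_entropy(logits, k) + 0.25 * F.l1_loss(eps_pred, eps)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()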

Requirements

python 3.6 or later
jupyter (for visualization)

torch>=1.4
torchvision
tqdm
tensorboardX

See requirement.txt for the authors' exact environment.

Training

Here is a minimal example of a latent rectification run command:

python run_train.py \
    --gan_type BigGAN \
    --gan_weights models/pretrained/generators/BigGAN/G_ema.pth \
    --deformator ortho \
    --out rectification_results_dir

This script saves the discovered latent space directions as LatentDeformator module weights.
It also saves image charts with examples of the latent directions. gan_type specifies the generator model. Take into account model-specific parameters for StyleGAN2 (gan_resolution, w_shift) and BigGAN (target_class).

Note that you can pass any parameter of the Params class defined in trainer.py as a command-line argument.
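The directions themselves are defined implicitly by the trained deformator: direction k is whatever latent shift the deformator produces from the k-th coordinate axis. A hypothetical helper for reading such a shift out of a restored deformator (its exact constructor and checkpoint layout are not assumed here):

import torch

def direction_vector(deformator, k, num_directions, magnitude=1.0):
    # Sketch only: assumes `deformator` is a restored LatentDeformator-like module
    # that maps a scaled one-hot input to a latent shift.
    one_hot = torch.zeros(1, num_directions)
    one_hot[0, k] = magnitude
    with torch.no_grad():
        return deformator(one_hot).squeeze(0)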

Evaluation

Run the evaluation.ipynb notebook to inspect the discovered directions.
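If you prefer to inspect directions outside the notebook, a small sketch like the following can render a manipulation strip for one direction; the helper below is hypothetical and only assumes a restored generator G and deformator:

import torch
from torchvision.utils import save_image

@torch.no_grad()
def direction_strip(G, deformator, z, k, num_directions,
                    magnitudes=(-9, -6, -3, 0, 3, 6, 9), out_path='strip.png'):
    # Hypothetical inspection helper (not part of the repository): render the same
    # latent code shifted along direction k with several magnitudes and save a strip.
    frames = []
    for eps in magnitudes:
        one_hot = torch.zeros(1, num_directions, device=z.device)
        one_hot[0, k] = eps
        frames.append(G(z + deformator(one_hot)))
    save_image(torch.cat(frames), out_path, nrow=len(magnitudes), normalize=True)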

Pre-trained Models

Run python download.py to download all pretrained generators and latent directions. We also provide a human_annotation.txt file with annotations for some of the directions.

The pretrained models are unchanged copies from the following sources:

100_celeb_hq_network-snapshot-010403.pth from https://github.com/ptrblck/prog_gans_pytorch_inference
G_ema.pth from https://github.com/ajbrock/BigGAN-PyTorch
stylegan2-ffhq-config-f.pkl from https://github.com/NVlabs/stylegan2, converted with https://github.com/rosinality/stylegan2-pytorch
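Once downloaded, the generators and directions can be restored programmatically; the call below mirrors the usage in evaluation.ipynb and assumes the load_from_dir helper from loading.py together with example paths created by download.py:

from loading import load_from_dir

# Assumed usage, mirroring evaluation.ipynb: restore the deformator, the generator
# and the shift predictor from a downloaded directory. Paths below are examples.
deformator, G, shift_predictor = load_from_dir(
    './models/pretrained/deformators/BigGAN/',
    G_weights='./models/pretrained/generators/BigGAN/G_ema.pth')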

Results

Here are some examples of manipulating generated images by moving along the discovered directions:

StyleGAN2 - FFHQ - opened eyes

BigBiGAN - ImageNet - light direction

BigGAN - ImageNet - rotation

Citation

@inproceedings{voynov2020unsupervised,
  title={Unsupervised discovery of interpretable directions in the gan latent space},
  author={Voynov, Andrey and Babenko, Artem},
  booktitle={International Conference on Machine Learning},
  pages={9786--9796},
  year={2020},
  organization={PMLR}
}

Credits

BigGAN code and weights are based on the authors' implementation: https://github.com/ajbrock/BigGAN-PyTorch

ProgGAN code and weights are based on: https://github.com/ptrblck/prog_gans_pytorch_inference

U-net segmentation model code is based on: https://github.com/milesial/Pytorch-UNet
