RONAN

This repository contains the source code for our paper "Where Did I Come From? Origin Attribution of AI-Generated Images" (NeurIPS 2023).

Image generation techniques have been gaining increasing attention, but concerns have been raised about potential misuse and intellectual property (IP) infringement associated with image generation models. It is therefore necessary to analyze the origin of images by inferring whether a specific image was generated by a particular model, i.e., origin attribution. Existing methods only focus on specific types of generative models and require additional procedures during the training or generation phase, which makes them unsuitable for pre-trained models that lack these operations and may impair generation quality. To address this problem, we first develop an alteration-free and model-agnostic origin attribution method via reverse-engineering on image generation models, i.e., inverting the input of a particular model for a specific image.
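The inversion idea can be illustrated with a minimal sketch: given an image, run gradient descent on the model's input (latent) to minimize a reconstruction distance. Below, a toy linear map `G(z) = A @ z` stands in for a real generator; `G`, `A`, and the hyperparameters are illustrative assumptions, not this repo's API.

```python
import numpy as np

# Toy stand-in for a generator: a fixed random linear map G(z) = A @ z.
# The repo's actual models are GANs and diffusion models; everything
# named here is an illustrative assumption.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))      # "generator" weights
G = lambda z: A @ z                  # toy differentiable generator

# A "belonging" image: produced by G from some ground-truth latent.
z_true = rng.standard_normal(4)
x = G(z_true)

# Reverse-engineering: gradient descent on the latent z, minimizing the
# squared L2 reconstruction distance ||G(z) - x||^2.
z = np.zeros(4)
lr = 0.02
for _ in range(1500):
    residual = G(z) - x
    z -= lr * 2 * A.T @ residual     # analytic gradient of the squared loss

loss = float(np.sum((G(z) - x) ** 2))
print(f"final reconstruction loss: {loss:.2e}")
```

For a belonging image the optimizer can drive the reconstruction loss close to zero, because a latent that exactly reproduces the image exists; this is the signal the method exploits.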

Assuming we have an inspected model $\mathcal{M}_1$, our origin attribution algorithm's objective is to flag an image as belonging to model $\mathcal{M}_1$ if it was generated by that model. On the other hand, the algorithm should consider the image as non-belonging if it was created by other models (e.g., $\mathcal{M}_2$ in the following figure) or if it is a real image.
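A minimal sketch of this decision rule, assuming (hypothetically) that belonging is decided by thresholding the reconstruction loss after inversion at some calibrated value `tau`; the toy linear generator, `invert`, and `tau` are all illustrative, not the repo's actual models or calibration:

```python
import numpy as np

# Hypothetical decision rule: flag an image as belonging to the inspected
# model if its reconstruction loss after inversion falls below a
# threshold tau. The linear generator below is a toy stand-in.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))
G = lambda z: A @ z

def invert(x, steps=3000, lr=0.02):
    """Gradient descent on ||G(z) - x||^2; returns the final loss."""
    z = np.zeros(4)
    for _ in range(steps):
        z -= lr * 2 * A.T @ (G(z) - x)
    return float(np.sum((G(z) - x) ** 2))

tau = 0.1  # hypothetical threshold, calibrated on known belonging images

x_belonging = G(rng.standard_normal(4))  # generated by the inspected model
x_other = rng.standard_normal(8)         # real image / other model's output

verdicts = {
    "belonging": invert(x_belonging) < tau,
    "non-belonging": invert(x_other) < tau,
}
print(verdicts)
```

An image from another source generally cannot be reconstructed exactly by the inspected model, so its loss stays above the threshold.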

News

Please check our follow-up work, "How to Trace Latent Generative Model Generated Images without Artificial Watermark?", which improves origin attribution performance on state-of-the-art latent generative models.

Environment

See requirements.txt.

Pre-trained Generative Models

For DCGAN, the pretrained model can be found in "dcgan_weights". For the consistency model, pretrained models can be downloaded from https://github.com/openai/consistency_models; they are not included directly in this repository due to their large file sizes. The remaining pretrained models are downloaded automatically.

Data

Reverse-engineering

To conduct reverse-engineering on the belonging images:

python main.py --model_type styleganv2ada_cifar10 --input_selection use_generated_image0 --distance_metric l2 --bs 1 --num_iter 1500 --strategy min --lr 0.1

To conduct reverse-engineering on the non-belonging images (real images):

python main.py --model_type styleganv2ada_cifar10 --input_selection use_cifar10_image0 --distance_metric l2 --bs 1 --num_iter 1500 --strategy min --lr 0.1

To conduct reverse-engineering on the non-belonging images (images generated by other models):

python main.py --model_type styleganv2ada_cifar10 --input_selection_model_type dcgan_cifar10 --distance_metric l2 --bs 1 --num_iter 1500 --strategy min --lr 0.1
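The commands above pass --strategy min. One plausible reading (our assumption; this README does not spell it out) is keeping the minimum reconstruction loss over several optimization restarts, which matters because inverting a nonlinear generator is a non-convex problem. A toy sketch with a random nonlinear map:

```python
import numpy as np

# Toy nonlinear "generator": tanh of a random linear map. Everything
# here is an illustrative assumption, not this repo's models or flags.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 4))
G = lambda z: np.tanh(A @ z)

x = G(rng.standard_normal(4))        # a belonging image

def invert_once(z0, steps=3000, lr=0.02):
    """One gradient-descent run on ||G(z) - x||^2 from init z0."""
    z = z0.copy()
    for _ in range(steps):
        h = A @ z
        residual = np.tanh(h) - x
        # chain rule: d/dz ||tanh(A z) - x||^2
        z -= lr * A.T @ (2 * residual * (1 - np.tanh(h) ** 2))
    return float(np.sum((G(z) - x) ** 2))

# Restart from several random latents and keep the best (minimum) loss.
losses = [invert_once(rng.standard_normal(4)) for _ in range(5)]
print("min loss over 5 restarts:", min(losses))
```

Taking the minimum guards against single runs that stall in a poor local minimum and would otherwise misclassify a belonging image.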

Cite this work

You are encouraged to cite the following paper if you use this repo in academic research.

@inproceedings{wang2023did,
  title={Where Did I Come From? Origin Attribution of AI-Generated Images},
  author={Wang, Zhenting and Chen, Chen and Zeng, Yi and Lyu, Lingjuan and Ma, Shiqing},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023}
}
