This is the official repository for the paper "Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning".
Abstract: Artificial Intelligence Generated Content (AIGC) has grown rapidly in recent years, among which AI-based image generation has gained widespread attention for its efficient and imaginative image creation. However, AI-Generated Images (AIGIs) may not satisfy human preferences due to their unique distortions, which highlights the need to understand and evaluate human preferences for AIGIs. To this end, we first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+, which provides human visual preference scores and detailed preference explanations from three perspectives: quality, authenticity, and correspondence. Then, based on the constructed AIGCIQA2023+ database, this paper presents MINT-IQA, a model that evaluates and explains human preferences for AIGIs from Multiple perspectives with INstruction Tuning. Specifically, MINT-IQA first learns to evaluate human preferences for AI-generated images from multiple perspectives; then, via a vision-language instruction tuning strategy, it attains a powerful ability to understand and explain human visual preferences for AIGIs, which can in turn provide feedback to further improve its assessment capability. Extensive experimental results demonstrate that the proposed MINT-IQA model achieves state-of-the-art performance in understanding and evaluating human visual preferences for AIGIs, and it also achieves competitive results on traditional IQA tasks compared with state-of-the-art IQA models. The AIGCIQA2023+ database and MINT-IQA model will be released to facilitate future research.
The constructed AIGCIQA2023 database can be downloaded via the links below: [Baidu Netdisk (extraction code: q9dt)], [Terabox]
The mapping between the MOS files and the evaluated perspectives is as follows:
- mosz1: Quality
- mosz2: Authenticity
- mosz3: Correspondence
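This mapping can be captured as a small lookup table, e.g. (the helper name below is ours for illustration, not part of the released code):

```python
# Mapping from MOS file stems in AIGCIQA2023 to the perspective each one scores.
MOS_PERSPECTIVES = {
    "mosz1": "Quality",
    "mosz2": "Authenticity",
    "mosz3": "Correspondence",
}

def perspective_of(mos_stem: str) -> str:
    """Return the human-preference perspective scored by a given MOS file (hypothetical helper)."""
    return MOS_PERSPECTIVES[mos_stem]
```

For example, `perspective_of("mosz3")` returns `"Correspondence"`.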
Clone this repository:
git clone https://github.com/wangjiarui153/MINT-IQAL.git
Create a conda virtual environment and activate it:
conda create -n MINTIQA python=3.8
conda activate MINTIQA
Install dependencies using requirements.txt:
pip install -r requirements.txt
The code and inference weights can be downloaded from: https://pan.baidu.com/s/1dJNN9sL-cPytOm8vjEDEHQ (extraction code: k2vf)
The database is available at: https://github.com/wangjiarui153/AIGCIQA2023
Before running inference:
- Set img_path at line 29 of inference.py.
- Set the prompt corresponding to the image at line 31 of inference.py.
- Configure the remaining file settings in config/options_infer.py.
Then run:
python inference.py
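As a minimal sketch of the settings described above (the variable names and example values are assumptions for illustration, not the actual contents of inference.py):

```python
# Hypothetical illustration of the two settings edited before running inference:
img_path = "examples/sample_aigi.png"  # line 29: path to the AI-generated image to evaluate
prompt = "a corgi wearing sunglasses on a beach"  # line 31: the text prompt used to generate the image
```

The remaining options (e.g. model weights and device) are configured in config/options_infer.py.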
- ✅ Release the AIGCIQA2023 database
- ✅ Release the Inference code (stage1 and stage2)
- Release the training code (stage1 and stage2)
If you have any inquiries, please don't hesitate to reach out via email at wangjiarui@sjtu.edu.cn.
If you find MINT-IQA helpful, please cite:
@misc{wang2024understandingevaluatinghumanpreferences,
      title={Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning},
      author={Jiarui Wang and Huiyu Duan and Guangtao Zhai and Xiongkuo Min},
      year={2024},
      eprint={2405.07346},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2405.07346},
}