(13/06/2023)
- Release the pretraining scripts for ROSITA.
(24/08/2021)
- Release the demo to perform fine-grained semantic alignments using the pretrained ROSITA model.
(15/08/2021)
- Release the basic framework for ROSITA, including the pretrained base ROSITA model, as well as the scripts to run fine-tuning and evaluation on three downstream tasks (i.e., VQA, REC, and ITR) over six datasets.
This repository contains the source code necessary to reproduce the results presented in our ACM MM paper ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration, which encodes cROSs- and InTrA-modal prior knowledge in a unified scene graph to perform knowledge-guided vision-and-language pretraining. Compared with existing counterparts, ROSITA learns better fine-grained semantic alignments across modalities, thus improving the capability of the pretrained model.
We compare ROSITA against existing state-of-the-art VLP methods on three downstream tasks. All methods use the base Transformer model for a fair comparison (IR/TR denote image and text retrieval, respectively). The trained checkpoints to reproduce these results are provided in finetune.md.
| Method | VQAv2 dev | VQAv2 std | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val | RefCOCOg test | IR-COCO R@1 | IR-COCO R@5 | IR-COCO R@10 | TR-COCO R@1 | TR-COCO R@5 | TR-COCO R@10 | IR-Flickr R@1 | IR-Flickr R@5 | IR-Flickr R@10 | TR-Flickr R@1 | TR-Flickr R@5 | TR-Flickr R@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ROSITA | 73.91 | 73.97 | 84.79 | 87.99 | 78.28 | 76.06 | 82.01 | 67.40 | 78.23 | 78.25 | 54.40 | 80.92 | 88.60 | 71.26 | 91.62 | 95.58 | 74.08 | 92.44 | 96.08 | 88.90 | 98.10 | 99.30 |
| SoTA-base | 73.59 | 73.67 | 81.56 | 87.40 | 74.48 | 76.05 | 81.65 | 65.70 | 75.90 | 75.93 | 54.00 | 80.80 | 88.50 | 70.00 | 91.10 | 95.50 | 74.74 | 92.86 | 95.82 | 86.60 | 97.90 | 99.20 |
We recommend a workstation with 4 GPUs (each with >= 24GB memory, e.g., RTX 3090 or V100), 120GB of RAM, and 50GB of free disk space. We strongly recommend using an SSD to guarantee high-speed I/O. You should also first install the following packages (an example pip command is sketched after the list):
- Python >= 3.6
- PyTorch >= 1.4 with CUDA >= 10.2
- torchvision >= 0.5.0
- Cython
- Apex@0c7d8e3
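For reference, the Python dependencies above (other than Apex, which is built from source in the next step) can typically be installed with pip. The command below is only a sketch; choose the torch/torchvision builds that match your CUDA version.

```bash
# Illustrative only: pick the torch/torchvision wheels that match your CUDA installation.
$ pip install "torch>=1.4" "torchvision>=0.5.0" Cython
```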
```bash
# git clone
$ git clone https://github.com/MILVLG/rosita.git

# build essential utils
$ cd rosita/rosita/utils/rec
$ python setup.py build
$ cp build/lib*/bbox.cpython*.so .

# build apex@0c7d8e3
$ git clone https://github.com/NVIDIA/apex.git
$ cd apex
$ git checkout 0c7d8e3
$ python setup.py install
```
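After the build steps above, a quick sanity check (a minimal sketch that only assumes the packages listed earlier are installed) can confirm that PyTorch sees your GPUs and that Apex imports correctly:

```bash
# Sanity check: print the PyTorch version, GPU visibility, and verify the apex import.
$ python -c "import torch, apex; print(torch.__version__, torch.cuda.is_available())"
```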
To download the required datasets to run this project, please check datasets.md for details.
Please check pretrain.md for details on ROSITA pretraining. A script to run pretraining is provided.
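As a rough illustration, a multi-GPU pretraining run would typically be launched with PyTorch's distributed launcher. The script path and arguments below are placeholders; pretrain.md documents the actual command.

```bash
# Placeholder launch command: consult pretrain.md for the real script name and configuration flags.
$ python -m torch.distributed.launch --nproc_per_node=4 <pretraining-script>.py --config <pretrain-config>.yaml
```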
Please check finetune.md for details on fine-tuning on the downstream tasks. Scripts to run fine-tuning on each task are provided, along with trained models that can be directly evaluated to reproduce the results.
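Before launching a full evaluation, it can be useful to confirm that a downloaded checkpoint loads cleanly. The snippet below is a minimal sketch; the filename is a placeholder, and the actual checkpoint files are listed in finetune.md.

```bash
# Placeholder filename: quickly check that a downloaded checkpoint deserializes without errors.
$ python -c "import torch; ckpt = torch.load('<checkpoint>.pth', map_location='cpu'); print(type(ckpt))"
```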
We provide the Jupyter notebook scripts for reproducing the visualization results shown in our paper.
We appreciate well-known open-source projects such as LXMERT, UNITER, OSCAR, and Huggingface, which helped us a lot when writing our code.
Yuhao Cui (@cuiyuhao1996) and Tong-An Luo (@Zoroaster97) are the main contributors to this repository. Please kindly contact them if you find any issue.
Please consider citing this paper if you use the code:
@inProceedings{cui2021rosita,
title={ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration},
author={Cui, Yuhao and Yu, Zhou and Wang, Chunqi and Zhao, Zhongzhou and Zhang, Ji and Wang, Meng and Yu, Jun},
booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
year={2021}
}