This is a repo for the paper: MemeCap: A Dataset for Captioning and Interpreting Memes
We used 10% of the training data as validation data.
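The exact split procedure and random seed are not specified here; a minimal sketch of such a 90/10 hold-out (with a hypothetical `split_train_val` helper and dummy IDs standing in for meme entries) might look like:

```python
import random

def split_train_val(examples, val_ratio=0.1, seed=42):
    """Shuffle the examples and hold out `val_ratio` of them for validation."""
    examples = list(examples)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(examples)
    n_val = int(len(examples) * val_ratio)
    return examples[n_val:], examples[:n_val]

# Dummy example: 100 placeholder IDs instead of real dataset entries.
train, val = split_train_val(range(100))
print(len(train), len(val))  # 90 10
```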
Images are available here: link
We used OpenFlamingo-9B and MiniGPT4 models for our zero-shot and few-shot experiments.
Check out their code bases for the model implementations.
If the code is helpful for your project, please cite our paper (BibTeX below).
@misc{hwang2023memecap,
    title={MemeCap: A Dataset for Captioning and Interpreting Memes},
    author={EunJeong Hwang and Vered Shwartz},
    year={2023},
    eprint={2305.13703},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}