Tencent, PKU, NUS, SEU, NJU
⚡ We will actively maintain this repository and incorporate new research as it emerges. If you have any questions, please contact swordli@tencent.com. We welcome collaboration on academic research and co-authoring papers.
Multimodal Large Language Models (MLLMs) are gaining increasing popularity in both academia and industry due to their remarkable performance in various applications such as visual question answering, visual perception, understanding, and reasoning. Over the past few years, significant efforts have been made to examine MLLMs from multiple perspectives. This paper presents a comprehensive review of 200+ benchmarks and evaluations for MLLMs, focusing on (1) perception and understanding, (2) cognition and reasoning, (3) specific domains, (4) key capabilities, and (5) other modalities. Finally, we discuss the limitations of the current evaluation methods for MLLMs and explore promising future directions. Our key argument is that evaluation should be regarded as a crucial discipline to better support the development of MLLMs.
- MDVP-Bench "Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want". Lin W, Wei X, An R, et al.. arXiv 2024. [Paper] [Github].
- ChEF "ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models". Shi Z, Wang Z, Fan H, et al.. arXiv 2023. [Paper] [Github].
- UniBench "UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling". Al-Tahan H, Garrido Q, Balestriero R, et al.. arXiv 2024. [Paper] [Github].
- MME "MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models". Fu C, Chen P, Shen Y, et al.. arXiv 2024. [Paper] [Github].
- MM-Vet "MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities". Yu W, Yang Z, Li L, et al.. arXiv 2023. [Paper] [Github].
- TouchStone "TouchStone: Evaluating Vision-Language Models by Language Models". Bai S, Yang S, Bai J, et al.. arXiv 2023. [Paper] [Github].
- MMBench "MMBench: Is Your Multi-modal Model an All-around Player?". Liu Y, Duan H, Zhang Y, et al.. arXiv 2024. [Paper] [Github].
- OwlEval "mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality". Ye Q, Xu H, Xu G, et al.. arXiv 2024. [Paper] [Github].
- Open-VQA "What Matters in Training a GPT4-Style Language Model with Multimodal Inputs?". Zeng Y, Zhang H, Zheng J, et al.. arXiv 2023. [Paper] [Github].
- SEED-Bench "SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension". Li B, Wang R, Wang G, et al.. arXiv 2023. [Paper] [Github].
- SEED-Bench-2 "SEED-Bench-2: Benchmarking Multimodal Large Language Models". Li B, Ge Y, Ge Y, et al.. arXiv 2023. [Paper] [Github].
- LLaVA-Bench "Visual Instruction Tuning". Liu H, Li C, Wu Q, et al.. arXiv 2023. [Paper] [Github].
- LAMM "LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark". Yin Z, Wang J, Cao J, et al.. arXiv 2023. [Paper] [Github].
Visual Grounding and Object Detection
- CODE "Contextual Object Detection with Multimodal Large Language Models". Zang Y, Li W, Han J, et al.. arXiv 2023. [Paper] [Github].
- Flickr30k Entities "Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models". Plummer B. A, Wang L, Cervantes C. M, et al.. arXiv 2016. [Paper] [Github].
- Visual7W "Visual7W: Grounded Question Answering in Images". Zhu Y, Groth O, Bernstein M, et al.. CVPR 2016. [Paper] [Github].
- V*Bench "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs". Wu P, Xie S, et al.. arXiv 2023. [Paper] [Github].
- Grounding-Bench "LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models". Zhang H, Li H, Li F, et al.. arXiv 2023. [Paper] [Github].
Fine-grained Identification and Recognition
- GVT-Bench "What Makes for Good Visual Tokenizers for Large Language Models?". Wang G, Ge Y, Ding X, et al.. arXiv 2023. [Paper] [Github].
- V* Bench "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs". Wu P, Xie S.. arXiv 2023. [Paper] [Github].
- MMVP "Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs". Tong S, Liu Z, Zhai Y, et al.. arXiv 2024. [Paper] [Github].
- CV-Bench "Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs". Tong S, Brown E, Wu P, et al.. arXiv 2024. [Paper] [Github].
- P2GB "Plug-and-Play Grounding of Reasoning in Multimodal Large Language Models". Chen J, Liu Y, Li D, et al.. arXiv 2024. [Paper] [Github].
- Visual CoT "Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning". Shao H, Qian S, Xiao H, et al.. arXiv 2024. [Paper] [Github].
- MagnifierBench "OtterHD: A High-Resolution Multi-modality Model". Li B, Zhang P, Yang J, et al.. arXiv 2023. [Paper] [Github].
- HR-Bench "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models". Wang W, Ding L, Zeng M, et al.. arXiv 2024. [Paper] [Github].
- SPARK "SPARK: Multi-Vision Sensor Perception and Reasoning Benchmark for Large-scale Vision-Language Models". Yu Y, Chung S, Lee B, et al.. arXiv 2024. [Paper] [Github].
Nuanced Vision-language Alignment
- Eqben "Equivariant Similarity for Vision-Language Foundation Models". Wang T, Lin K, Li L, Lin C, et al.. ICCV 2023. [Paper] [Github].
- SPEC "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding". Peng W, Xie S, You Z, et al.. CVPR 2024. [Paper] [Github].
- VALSE "VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena". Parcalabescu L, Cafagna M, Muradjan L, et al.. ACL 2022. [Paper] [Github].
- VL-Checklist "VL-CheckList: Evaluating Pre-trained Vision-Language Models with Objects, Attributes and Relations". Zhao T, Zhang T, Zhu M, et al.. arXiv 2023. [Paper] [Github].
- Winoground "Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality". Thrush T, Jiang R, Bartolo M, et al.. CVPR 2022. [Paper] [Github].
- ARO "When and why vision-language models behave like bags-of-words, and what to do about it?". Yuksekgonul M, Bianchi F, Kalluri P, et al.. ICLR 2023. [Paper] [Github].
Multi-image Understanding
- Mementos "Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences". Wang X, Zhou Y, Liu X, et al.. arXiv 2024. [Paper] [Github].
- MileBench "MileBench: Benchmarking MLLMs in Long Context". Song D, Chen S, Chen G, et al.. arXiv 2024. [Paper] [Github].
- MuirBench "MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding". Wang F, Fu X, Huang J, et al.. arXiv 2024. [Paper] [Github].
- CompBench "CompBench: A Comparative Reasoning Benchmark for Multimodal LLMs". Kil J, Mai Z, Lee J, et al.. arXiv 2024. [Paper] [Github].
- MMIU "MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models". Meng F, Wang J, Li C, et al.. arXiv 2024. [Paper] [Github].
Implication Understanding
- II-Bench "II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models". Liu Z, Fang F, Feng X, et al.. arXiv 2024. [Paper] [Github].
- ImplicitAVE "ImplicitAVE: An Open-Source Dataset and Multimodal LLMs Benchmark for Implicit Attribute Value Extraction". Zou H, Samuel V, Zhou Y, et al.. ACL 2024. [Paper] [Github].
- FABA-Bench "Facial Affective Behavior Analysis with Instruction Tuning". Li Y, Dao A, Bao W, et al.. arXiv 2024. [Paper] [Github].
Image Quality and Aesthetics Perception
- AesBench "AesBench: An Expert Benchmark for Multimodal Large Language Models on Image Aesthetics Perception". Huang Y, Yuan Q, Sheng X, et al.. arXiv 2024. [Paper] [Github].
- UNIAA "UNIAA: A Unified Multi-modal Image Aesthetic Assessment Baseline and Benchmark". Zhou Z, Wang Q, Lin B, et al.. arXiv 2024. [Paper] [Github].
- DesignProbe "DesignProbe: A Graphic Design Benchmark for Multimodal Large Language Models". Lin J, Huang D, Zhao T, et al.. arXiv 2024. [Paper] [Github].
- Q-Bench "Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision". Wu H, Zhang Z, Zhang E, et al.. arXiv 2024. [Paper] [Github].
- Q-Bench+ "A Benchmark for Multi-modal Foundation Models on Low-level Vision: from Single Images to Pairs". Zhang Z, Wu H, Zhang E, et al.. TPAMI. [Paper] [Github].
Visual Relation
- MMRel "MMRel: A Relation Understanding Dataset and Benchmark in the MLLM Era". Nie J, Zhang G, An W, et al.. arXiv 2024. [Paper] [Github].
- What’sUp "What's "up" with vision-language models? Investigating their struggle with spatial reasoning". Kamath A, Hessel J, Chang K. EMNLP 2023. [Paper] [Github].
- GSR-BENCH "GSR-BENCH: A Benchmark for Grounded Spatial Reasoning Evaluation via Multimodal LLMs". Rajabi N, Kosecka J. arXiv 2024. [Paper] [Github].
- CRPE "The All-Seeing Project V2: Towards General Relation Comprehension of the Open World". Wang W, Ren Y, Luo H, et al.. ECCV 2024. [Paper] [Github].
- VSR "Visual Spatial Reasoning". Liu F, Emerson G, Collier N, et al.. arXiv 2022. [Paper] [Github].
- SpatialRGPT "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model". Cheng A, Yin H, Fu Y, et al.. arXiv 2024. [Paper] [Github].
- MuCR "Multimodal Causal Reasoning Benchmark: Challenging Vision Large Language Models to Infer Causal Links Between Siamese Images". Li Z, Wang H, Liu D, et al.. arXiv 2024. [Paper] [Github].
Context-dependent Reasoning
- CODIS "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models". Luo F, Chen C, Wan Z, et al.. arXiv 2024. [Paper] [Github].
- CFMM "Eyes Can Deceive: Benchmarking Counterfactual Reasoning Abilities of Multi-modal Large Language Models". Li Y, Tian W, Jiao Y, et al.. arXiv 2024. [Paper] [Github].
- VL-ICLBench "VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning". Zong Y, Bohdal O, Hospedales T, et al.. arXiv 2023. [Paper] [Github].
CoT Reasoning
- SCIENCEQA "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". Lu P, Mishra S, Xia T, et al.. NeurIPS 2022. [Paper] [Github].
- VisualCoT "Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning". Shao H, Qian S, Xiao H, et al.. arXiv 2024. [Paper] [Github].
- M3CoT "M3CoT: A Novel Benchmark for Multi-Domain Multi-step Multi-modal Chain-of-Thought". Chen Q, Qin L, Zhang J, et al.. ACL 2024. [Paper] [Github].
Vision-Indispensable Capabilities
- CLEVR "CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning". Johnson J, Hariharan B, Maaten L, et al.. arXiv 2016. [Paper] [Github].
- VQAv2 "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering". Goyal Y, Khot T, Summers-Stay D, et al.. CVPR 2017. [Paper] [Github].
- GQA "GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering". Hudson D, Manning C. CVPR 2019. [Paper] [Github].
- MMStar "Are We on the Right Way for Evaluating Large Vision-Language Models?". Chen L, Li J, Dong X, et al.. arXiv 2024. [Paper] [Github].
Knowledge-based Visual Question Answering
- KB-VQA "Explicit Knowledge-based Reasoning for Visual Question Answering". Wang P, Wu Q, Shen C, et al.. arXiv 2015. [Paper] [Github].
- FVQA "FVQA: Fact-based Visual Question Answering". Wang P, Wu Q, Shen C, et al.. arXiv 2016. [Paper] [Github].
- OK-VQA "OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge". Marino K, Rastegari M, Farhadi A, et al.. CVPR 2019. [Paper] [Github].
- A-OKVQA "A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge". Schwenk D, Khandelwal A, Clark C, et al.. arXiv 2022. [Paper] [Github].
- SOK-Bench "SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge". Wang A, Wu B, Chen S, et al.. CVPR 2024. [Paper] [Github].
Knowledge Editing
- MMEdit "Can We Edit Multimodal Large Language Models?". Cheng S, Tian B, Liu Q, et al.. EMNLP 2023. [Paper] [Github].
- MIKE "MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge Editing". Li J, Du M, Zhang C, et al.. arXiv 2024. [Paper] [Github].
- VLKEB "VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark". Huang H, Zhong H, Yu T, et al.. arXiv 2024. [Paper] [Github].
- MC-MKE "MC-MKE: A Fine-Grained Multimodal Knowledge Editing Benchmark Emphasizing Modality Consistency". Zhang J, Zhang H, Yin X, et al.. arXiv 2024. [Paper] [Github].
Intelligent Question Answering
- RAVEN "RAVEN: A Dataset for Relational and Analogical Visual rEasoNing". Zhang C, Gao F, Jia B, et al.. CVPR 2019. [Paper] [Github].
- MARVEL "MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning". Jiang Y, Zhang J, Sun K, et al.. arXiv 2024. [Paper] [Github].
- VCog-Bench "What is the Visual Cognition Gap between Humans and Multimodal LLMs?". Cao X, Lai B, Ye W, et al.. arXiv 2024. [Paper] [Github].
- M3GIA "M3GIA: A Cognition Inspired Multilingual and Multimodal General Intelligence Ability Benchmark". Song W, Li Y, Xu J, et al.. arXiv 2024. [Paper] [Github].
Mathematical Question Answering
- MathVista "MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts". Lu P, Bansal H, Xia T, et al.. ICLR 2024. [Paper] [Github].
- MathVerse "MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?". Zhang R, Jiang D, Zhang Y, et al.. ECCV 2024. [Paper] [Github].
- NPHardEval4V "NPHardEval4V: A Dynamic Reasoning Benchmark of Multimodal Large Language Models". Fan L, Hua W, Li X, et al.. arXiv 2024. [Paper] [Github].
- Math-Vision "Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset". Wang K, Pan J, Shi W, et al.. arXiv 2024. [Paper] [Github].
- MATHCHECK-GEO "Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist". Zhou Z, Liu S, Ning M, et al.. arXiv 2024. [Paper] [Github].
- Geometry3K "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning". Lu P, Gong R, Jiang S, et al.. ACL 2021. [Paper] [Github].
Multidisciplinary Question Answering
- M3Exam "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models". Zhang W, Aljunied S, Gao C, et al.. NeurIPS 2023. [Paper] [Github].
- CMMMU "CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark". Zhang G, Du X, Chen B, et al.. arXiv 2024. [Paper] [Github].
- ScienceQA "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". Lu P, Mishra S, Xia T, et al.. NeurIPS 2022. [Paper] [Github].
- MMMU "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI". Yue X, Ni Y, Zhang K, et al.. CVPR 2024. [Paper] [Github].
- CMMU "CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning". He Z, Wu X, Zhou P, et al.. arXiv 2024. [Paper] [Github].
- SceMQA "SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark". Liang Z, Guo K, Liu G, et al.. arXiv 2024. [Paper] [Github].
- MULTI "MULTI: Multimodal Understanding Leaderboard with Text and Images". Zhu Z, Xu Y, Chen L, et al.. arXiv 2024. [Paper] [Github].
Text-oriented Question Answering
- OCRBench "On the Hidden Mystery of OCR in Large Multimodal Models". Liu Y, Li Z, Huang M, et al.. arXiv 2024. [Paper] [Github].
- P2GB "Plug-and-Play Grounding of Reasoning in Multimodal Large Language Models". Chen J, Liu Y, Li D, et al.. arXiv 2024. [Paper] [Github].
- TextVQA "Towards VQA Models That Can Read". Singh A, Natarajan V, Shah M, et al.. CVPR 2019. [Paper] [Github].
- TextCaps "TextCaps: a Dataset for Image Captioning with Reading Comprehension". Sidorov O, Hu R, Rohrbach M, et al.. ECCV 2020. [Paper] [Github].
- SEED-Bench-2-Plus "SEED-Bench-2-Plus: Benchmarking Multimodal Large Language Models with Text-Rich Visual Comprehension". Bohao Li, Yuying Ge, Yi Chen, et al.. arXiv 2024. [Paper] [Github].
Document-oriented Question Answering
- SPDocVQA "Document Visual Question Answering Challenge 2020". Minesh Mathew, Ruben Tito, Dimosthenis Karatzas, et al.. DAS 2020. [Paper] [Github].
- MPDocVQA "Hierarchical multimodal transformers for Multi-Page DocVQA". Rubèn Tito, Dimosthenis Karatzas, Ernest Valveny. arXiv 2022. [Paper] [Github].
- InfographicVQA "InfographicVQA". Minesh Mathew, Viraj Bagal, Rubèn Pérez Tito, et al.. arXiv 2021. [Paper] [Github].
- DUDE "Document Understanding Dataset and Evaluation (DUDE)". Jordy Van Landeghem, Rubén Tito, Łukasz Borchmann, et al.. ICCV 2023. [Paper] [Github].
- MM-NIAH "Needle In A Multimodal Haystack". Weiyun Wang, Shuibo Zhang, Yiming Ren, et al.. arXiv 2024. [Paper] [Github].
Chart-oriented Question Answering
- ChartQA "ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning". Ahmed Masry, Do Xuan Long, Jia Qing Tan, et al.. ACL 2022. [Paper] [Github].
- ChartX "ChartX and ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning". Renqiu Xia, Bo Zhang, Hancheng Ye, et al.. arXiv 2024. [Paper] [Github].
- ChartBench "ChartBench: A Benchmark for Complex Visual Reasoning in Charts". Zhengzhuo Xu, Sinan Du, Yiyan Qi, et al.. arXiv 2023. [Paper] [Github].
- SciGraphQA "SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs". Shengzhi Li, Nima Tajbakhsh. arXiv 2023. [Paper] [Github].
- MMC-Benchmark "MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning". Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, et al.. NAACL 2024. [Paper] [Github].
- CharXiv "CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs". Zirui Wang, Mengzhou Xia, Luxi He, et al.. arXiv 2024. [Paper] [Github].
- CHOPINLLM "On Pre-training of Multimodal Language Models Customized for Chart Understanding". Wan-Cyuan Fan, Yen-Chun Chen, Mengchen Liu, et al.. arXiv 2024. [Paper] [Github].
- SciFIBench "SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation". Jonathan Roberts, Kai Han, Neil Houlsby, et al.. arXiv 2024. [Paper] [Github].
HTML-oriented Question Answering
- Web2Code "Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs". Sukmin Yun, Haokun Lin, Rusiru Thushara, et al.. arXiv 2024. [Paper] [Github].
- VisualWebBench "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?". Junpeng Liu, Yifan Song, Bill Yuchen Lin, et al.. arXiv 2024. [Paper] [Github].
- Plot2Code "Plot2Code: A Comprehensive Benchmark for Evaluating Multi-modal Large Language Models in Code Generation from Scientific Plots". Chengyue Wu, Yixiao Ge, Qiushan Guo, et al.. arXiv 2024. [Paper] [Github].
Embodied Decision-making
- VisualAgentBench "VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents". Xiao Liu, Tianjie Zhang, Yu Gu, et al.. arXiv 2024. [Paper] [Github].
- EgoPlan-Bench "EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning". Yi Chen, Yuying Ge, Yixiao Ge, et al.. arXiv 2023. [Paper] [Github].
- PCA-EVAL "Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond". Liang Chen, Yichi Zhang, Shuhuai Ren, et al.. arXiv 2023. [Paper] [Github].
- OpenEQA "OpenEQA: Embodied Question Answering in the Era of Foundation Models". Arjun Majumdar, Anurag Ajay, Xiaohan Zhang, et al.. CVPR 2024. [Paper] [Github].
- OSWorld "OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments". Tianbao Xie, Danyang Zhang, Jixuan Chen, et al.. NeurIPS 2024. [Paper] [Github].
Mobile Agency
- Mobile-Eval "Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception". Junyang Wang, Haiyang Xu, Jiabo Ye, et al.. ICLR 2024. [Paper] [Github].
- Ferret-UI "Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs". You K, Zhang H, Schoop E, et al.. arXiv 2024. [Paper] [Github].
- CRAB "CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents". Tianqi Xu, Linyao Chen, Dai-Jie Wu, et al.. arXiv 2024. [Paper] [Github].
Multilingual Understanding
- CMMU "CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning". Zheqi He, Xinya Wu, Pengfei Zhou, et al.. arXiv 2024. [Paper] [Github].
- Henna "Peacock: A Family of Arabic Multimodal Large Language Models and Benchmarks". Fakhraddin Alwajih, El Moatez Billah Nagoudi, Gagan Bhatia, et al.. arXiv 2024. [Paper] [Github].
- LaVy-Bench "LaVy: Vietnamese Multimodal Large Language Model". Chi Tran, Huong Le Thanh. arXiv 2024. [Paper] [Github].
- MTVQA "MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering". Jingqun Tang, Qi Liu, Yongjie Ye, et al.. arXiv 2024. [Paper] [Github].
- CVQA "CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark". David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, et al.. arXiv 2024. [Paper] [Github].
- CMMMU "CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark". Ge Zhang, Xinrun Du, Bei Chen, et al.. arXiv 2024. [Paper] [Github].
- MULTI "MULTI: Multimodal Understanding Leaderboard with Text and Images". Zichen Zhu, Yang Xu, Lu Chen, et al.. arXiv 2024. [Paper] [Github].
Geography and Remote Sensing
- LHRS-Bench "LHRS-Bot: Empowering Remote Sensing with VGI-Enhanced Large Multimodal Language Model". Dilxat Muhtar, Zhenshi Li, Feng Gu, et al.. arXiv 2024. [Paper] [Github].
- ChartingNewTerritories "Charting New Territories: Exploring the Geographic and Geospatial Capabilities of Multimodal LLMs". Jonathan Roberts, Timo Lüddecke, Rehan Sheikh, et al.. arXiv 2023. [Paper] [Github].
Medicine
- GMAI-MMBench "GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI". Pengcheng Chen, Jin Ye, Guoan Wang, et al.. arXiv 2024. [Paper] [Github].
- M3D "M3D: Advancing 3D Medical Image Analysis with Multi-modal Large Language Models". Fan Bai, Yuxin Du, Tiejun Huang, et al.. arXiv 2024. [Paper] [Github].
- Asclepius "Asclepius: A Spectrum Evaluation Benchmark for Medical Multi-Modal Large Language Models". Wenxuan Wang, Yihang Su, Jingyuan Huan, et al.. arXiv 2024. [Paper] [Github].
- MultiMed "MultiMed: Massively Multimodal and Multitask Medical Understanding". Shentong Mo, Paul Pu Liang. arXiv 2024. [Paper] [Github].
Society
- VizWiz "VizWiz Grand Challenge: Answering Visual Questions from Blind People". Danna Gurari, Qing Li, Abigale J. Stangl, et al.. arXiv 2018. [Paper] [Github].
- MM-Soc "MM-Soc: Benchmarking Multimodal Large Language Models in Social Media Platforms". Yiqiao Jin, Minje Choi, Gaurav Verma, et al.. ACL 2024. [Paper] [Github].
- TransportationGames "TransportationGames: Benchmarking Transportation Knowledge of (Multimodal) Large Language Models". Xue Zhang, Xiangyu Shi, Xinyue Lou, et al.. arXiv 2024. [Paper] [Github].
Industry
- MMRo "MMRo: Are Multimodal LLMs Eligible as the Brain for In-Home Robotics?". Jinming Li, Yichen Zhu, Zhiyuan Xu, et al.. arXiv 2024. [Paper] [Github].
- DesignQA "DesignQA: A Multimodal Benchmark for Evaluating Large Language Models' Understanding of Engineering Documentation". Anna C. Doris, Daniele Grandi, Ryan Tomich, et al.. arXiv 2024. [Paper] [Github].
Autonomous Driving
- NuScenes-QA "NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario". Tianwen Qian, Jingjing Chen, Linhai Zhuo, et al.. AAAI 2024. [Paper] [Github].
- DriveLM-DATA "DriveLM: Driving with Graph Visual Question Answering". Chonghao Sima, Katrin Renz, Kashyap Chitta, et al.. ECCV 2024. [Paper] [Github].
Long-context
- Mile-Bench "MileBench: Benchmarking MLLMs in Long Context". Song D, Chen S, Chen G H, et al.. arXiv 2024. [Paper] [Github].
- MMNeedle "Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models". Wang H, Shi H, Tan S, et al.. arXiv 2024. [Paper] [Github].
- MLVU "MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding". Zhou J, Shu Y, Zhao B, et al.. arXiv 2024. [Paper] [Github].
Instruction Following
- CoIN "CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model". Chen C, Zhu J, Luo X, et al.. arXiv 2024. [Paper] [Github].
- MIA-Bench "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs". Qian Y, Ye H, Fauconnier J P, et al.. arXiv 2024. [Paper] [Github].
- DEMON "Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions". Li J, Pan K, Ge Z, et al.. ICLR 2023. [Paper] [Github].
- VisIT-Bench "VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use". Bitton Y, Bansal H, Hessel J, et al.. NeurIPS 2023. [Paper] [Github].
Hallucination
- POPE "Evaluating Object Hallucination in Large Vision-Language Models". Li Y, Du Y, Zhou K, et al.. EMNLP 2023. [Paper] [Github].
- GAVIE "Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning". Liu F, Lin K, Li L, et al.. ICLR 2023. [Paper] [Github].
- HaELM "Evaluation and Analysis of Hallucination in Large Vision-Language Models". Wang J, Zhou Y, Xu G, et al.. arXiv 2023. [Paper] [Github].
- M-HalDetect "Detecting and Preventing Hallucinations in Large Vision Language Models". Gunjal A, Yin J, Bas E.. AAAI 2024. [Paper] [Github].
- Bingo "Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges". Cui C, Zhou Y, Yang X, et al.. arXiv 2023. [Paper] [Github].
- HallusionBench "HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models". Guan T, Liu F, Wu X, et al.. CVPR 2024. [Paper] [Github].
- VHTest "Visual Hallucinations of Multi-modal Large Language Models". Huang W, Liu H, Guo M, et al.. arXiv 2024. [Paper] [Github].
- CorrelationQA "The Instinctive Bias: Spurious Images lead to Hallucination in MLLMs". Han T, Lian Q, Pan R, et al.. arXiv 2024. [Paper] [Github].
- CHAIR "Object Hallucination in Image Captioning". Rohrbach A, Hendricks L A, Burns K, et al.. EMNLP 2018. [Paper] [Github].
- MHaluBench "Unified Hallucination Detection for Multimodal Large Language Models". Chen X, Wang C, Xue Y, et al.. arXiv 2024. [Paper] [Github].
- VideoHallucer "VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models". Wang Y, Wang Y, Zhao D, et al.. arXiv 2024. [Paper] [Github].
- MMHAL-BENCH "Aligning Large Multimodal Models with Factually Augmented RLHF". Sun Z, Shen S, Cao S, et al.. arXiv 2023. [Paper] [Github].
- AMBER "AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation". Wang J, Wang Y, Xu G, et al.. arXiv 2023. [Paper] [Github].
- MMECeption "GenCeption: Evaluate Multimodal LLMs with Unlabeled Unimodal Data". Cao L, Buchner V, Senane Z, et al.. arXiv 2024. [Paper] [Github].
Robustness
- MAD-Bench "How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts". Qian Y, Zhang H, Yang Y, et al.. arXiv 2024. [Paper] [Github].
- MMR "Seeing Clearly, Answering Incorrectly: A Multimodal Robustness Benchmark for Evaluating MLLMs on Leading Questions". Liu Y, Liang Z, Wang Y, et al.. arXiv 2024. [Paper] [Github].
- MM-SpuBench "MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs". Ye W, Zheng G, Ma Y, et al.. arXiv 2024. [Paper] [Github].
- MM-SAP "MM-SAP: A Comprehensive Benchmark for Assessing Self-Awareness of Multimodal Large Language Models in Perception". Wang Y, Liao Y, Liu H, et al.. arXiv 2024. [Paper] [Github].
- BenchLMM "BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models". Cai R, Song Z, Guan D, et al.. arXiv 2023. [Paper] [Github].
- VQAv2-IDK "Visually Dehallucinative Instruction Generation: Know What You Don’t Know". Cha S, Lee J, Lee Y, et al.. ICASSP 2024. [Paper] [Github].
Safety
- MMUBench "Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models". Li J, Wei Q, Zhang C, et al.. arXiv 2024. [Paper] [Github].
- JailBreakV-28K "JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks". Luo W, Ma S, Liu X, et al.. arXiv 2024. [Paper] [Github].
- MultiTrust "Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study". Zhang Y, Huang Y, Sun Y, et al.. arXiv 2024. [Paper] [Github].
- MM-SafetyBench "MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models". Liu X, Zhu Y, Gu J, et al.. ECCV 2024. [Paper] [Github].
- SHIELD "SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection with Multimodal Large Language Models". Shi Y, Gao Y, Lai Y, et al.. arXiv 2024. [Paper] [Github].
- RTVLM "Red Teaming Visual Language Models". Li M, Li L, Yin Y, et al.. arXiv 2024. [Paper] [Github].
Temporal Perception
- MVBench "MVBench: A Comprehensive Multi-modal Video Understanding Benchmark". Li K, Wang Y, He Y, et al.. CVPR 2024. [Paper] [Github].
- TimeIT "TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding". Ren S, Yao L, Li S, et al.. CVPR 2024. [Paper] [Github].
- ViLMA "ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models". Kesen I, Pedrotti A, Dogan M, et al.. ICLR 2024. [Paper] [Github].
- VITATECS "VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models". Li S, Li L, Ren S, et al.. arXiv 2023. [Paper] [Github].
- TempCompass "TempCompass: Do Video LLMs Really Understand Videos?". Liu Y, Li S, Liu Y, et al.. arXiv 2024. [Paper] [Github].
- OSCaR "OSCaR: Object State Captioning and State Change Representation". Nguyen N, Bi J, Vosoughi A, et al.. arXiv 2024. [Paper] [Github].
- ADLMCQ "LLAVIDAL: Benchmarking Large Language Vision Models for Daily Activities of Living". Chakraborty R, Sinha A, Reilly D, et al.. arXiv 2024. [Paper] [Github].
- Perception Test "Perception Test: A Diagnostic Benchmark for Multimodal Video Models". Patraucean V, Smaira L, Gupta A, et al.. NeurIPS 2024. [Paper] [Github].
Long Video Understanding
- MovieChat-1K "MovieChat: From Dense Token to Sparse Memory for Long Video Understanding". Song E, Chai W, Wang G, et al.. CVPR 2024. [Paper] [Github].
- EgoSchema "EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding". Mangalam K, Akshulakov R, Malik J. NeurIPS 2023. [Paper] [Github].
- Event-Bench "Towards Event-oriented Long Video Understanding". arXiv 2024. [Paper] [Github].
- MLVU "MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding". Zhou J, Shu Y, Zhao B, et al.. arXiv 2024. [Paper] [Github].
Comprehensive Evaluation
- Video-Bench "Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models". Ning M, Zhu B, Xie Y, et al.. arXiv 2023. [Paper] [Github].
- MMBench-Video "MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding". Fang X, Mao K, Duan H, et al.. arXiv 2024. [Paper] [Github].
- Video-MME "Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis". Fu C, Dai Y, Luo Y, et al.. arXiv 2024. [Paper] [Github].
- AutoEval-Video "AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering". Chen X, Lin Y, Zhang Y, et al.. arXiv 2023. [Paper] [Github].
- MMWorld "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos". He X, Feng W, Zheng K, et al.. arXiv 2024. [Paper] [Github].
- WorldNet "WorldGPT: Empowering LLM as Multimodal World Model". Ge Z, Huang H, Zhou M, et al.. arXiv 2024. [Paper] [Github].
Audio
- Dynamic-SUPERB "Dynamic-SUPERB: Towards a Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech". Huang C, Lu K H, Wang S H, et al.. ICASSP 2024. [Paper] [Github].
- MuChoMusic "MuChoMusic: Evaluating Music Understanding in Multimodal Audio-Language Models". Weck B, Manco I, Benetos E, et al.. arXiv 2024. [Paper] [Github].
- AIR-Bench "AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension". Yang Q, Xu J, Liu W, et al.. arXiv 2024. [Paper] [Github].
3D Points
- ScanQA "ScanQA: 3D Question Answering for Spatial Scene Understanding". Azuma D, Miyanishi T, Kurita S, et al.. CVPR 2022. [Paper] [Github].
- ScanReason "ScanReason: Empowering 3D Visual Grounding with Reasoning Capabilities". Zhu C, Wang T, Zhang W, et al.. arXiv 2024. [Paper] [Github].
- LAMM "LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark". Yin Z, Wang J, Cao J, et al.. NeurIPS 2024. [Paper] [Github].
- SpatialRGPT "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model". Cheng A C, Yin H, Fu Y, et al.. arXiv 2024. [Paper] [Github].
- M3DBench "M3DBench: Let’s Instruct Large Models with Multi-modal 3D Prompts". Li M, Chen X, Zhang C, et al.. arXiv 2023. [Paper] [Github].
Omni-modal
- MCUB "Model Composition for Multimodal Large Language Models". Chen C, Du Y, Fang Z, et al.. arXiv 2024. [Paper] [Github].
- AVQA "AVQA: A Dataset for Audio-Visual Question Answering on Videos". Yang P, Wang X, Duan X, et al.. MM 2022. [Paper] [Github].
- MusicAVQA "Learning to Answer Questions in Dynamic Audio-Visual Scenarios". Li G, Wei Y, Tian Y, et al.. CVPR 2022. [Paper] [Github].
- MMT-Bench "MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI". Ying K, Meng F, Wang J, et al.. arXiv 2024. [Paper] [Github].