CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation
CodeScope is an execution-based, multilingual, multi-task, multi-dimensional evaluation benchmark for comprehensively gauging the capabilities of LLMs on coding tasks. It covers 43 programming languages and 8 coding tasks, and evaluates the coding performance of LLMs along three dimensions (perspectives): difficulty, efficiency, and length.
- [2024.05.15] CodeScope has been accepted to the ACL 2024 Main Conference. We thank the academic community for its recognition.
- [2023.11.15] 🎉🎉🎉 CodeScope is released! 🎉🎉🎉
Data: 🤗 Hugging Face, Google Drive, or GitHub
CodeScope evaluates the comprehensive code understanding and code generation abilities of LLMs across eight coding tasks.
Please cite the paper if you use the data or code from CodeScope.
@misc{yan2023codescope,
title={CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation},
author={Weixiang Yan and Haitian Liu and Yunkun Wang and Yunzhe Li and Qian Chen and Wen Wang and Tingyu Lin and Weishan Zhao and Li Zhu and Shuiguang Deng and Hari Sundaram},
year={2023},
eprint={2311.08588},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
For questions, please feel free to reach out via email at weixiangyan@ucsb.edu.