A better Alpaca model trained with less data (only 9k instructions of the original set)

AlpaGasus: Training a Better Alpaca with Fewer Data (ICLR 2024)

Lichang Chen*, Shiyang Li*, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin

*Denotes equal contribution


Our model "AlpaGasus" is pronounced "/ˈælpəˈɡeɪsəs/" or "/ˈælpəˈɡəsəs/". The logo was generated by Midjourney.

News

  • [2023.7] We released our paper. If you have any questions about our project, please send an email to bobchen@umd.edu.
  • [2023.9] Thanks to @GPT4animal for reproducing the results in our paper. Please check this fantastic repo: https://github.com/gpt4life/alpagasus.
  • [2023.9] Thanks to @gauss5930 and @YooYunS, who implemented the QLoRA versions of AlpaGasus-7B and 13B, which can be run on consumer-level GPUs. Please refer to their repo: Alpagasus2-QLoRA. They also show that tuning LLaMA-2 can achieve better performance.

Citation

If you find our paper useful, please consider citing:

@inproceedings{
    chen2024alpagasus,
    title={AlpaGasus: Training a Better Alpaca with Fewer Data},
    author={Lichang Chen and Shiyang Li and Jun Yan and Hai Wang and Kalpa Gunaratna and Vikas Yadav and Zheng Tang and Vijay Srinivasan and Tianyi Zhou and Heng Huang and Hongxia Jin},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=FdVXgSJhvz}
}
