This project compares fine-tuning methods from the Hugging Face PEFT framework across LLMs on specified downstream tasks. The resulting experimental data is intended as a reference for other development work.

Table of Contents
- Prepare datasets
  - Pre-process data
  - Filtering
- Create models
  - Sequence classification model
- Implement fine-tuning methods
  - P-Tuning
  - Prefix Tuning
  - LoRA
- Experiments on W&B
https://api.wandb.ai/links/yuchengml/1ev46x8s
Distributed under the MIT License. See LICENSE for more information.