
title: Lora Cerebras Gpt2.7b Alpaca Shortprompt
emoji: 🐨
colorFrom: yellow
colorTo: pink
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
license: apache-2.0

🦙🐕🧠 Cerebras-GPT2.7B LoRA Alpaca ShortPrompt

Open In Colab Open In Spaces

LoRA weights and scripts for finetuning Cerebras-GPT-2.7B on the Alpaca dataset with a shortened prompt, plus inference demos.

📈 Warnings

The model is generally coherent, but it also hallucinates a lot of factually incorrect responses. Avoid using it for anything that requires factual correctness.

📚 Instructions

  1. Use a machine with an NVIDIA GPU that has 12-24 GB of VRAM.

  2. Get the environment ready

conda create -n cerebras-lora python=3.10
conda activate cerebras-lora
conda install -y cuda -c nvidia/label/cuda-11.7.0
conda install -y pytorch=1.13.1 pytorch-cuda=11.7 -c pytorch
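
Optionally, confirm that the CUDA-enabled PyTorch build can actually see your GPU before continuing. This quick check is not part of the repo; it only uses standard torch calls:

# optional GPU sanity check (not part of the repo)
import torch

print(torch.__version__)          # expect 1.13.1
print(torch.cuda.is_available())  # expect True on a working CUDA 11.7 setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
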
  3. Clone the repo and install the requirements
git clone https://github.com/lxe/cerebras-lora-alpaca.git && cd cerebras-lora-alpaca
pip install -r requirements.txt
  4. Run the inference demo
python app.py
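
app.py wraps inference in a Gradio UI. For reference, here is a minimal non-Gradio sketch of the same loading pattern, assuming the usual transformers + peft flow; the adapter path lora-alpaca, the generation settings, and the prompt format are illustrative assumptions rather than values taken from the repo:

# minimal inference sketch; adapter path and prompt format are assumptions
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "cerebras/Cerebras-GPT-2.7B"  # base model on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)
model = PeftModel.from_pretrained(model, "lora-alpaca")  # hypothetical path to the LoRA weights

prompt = "Human: Write a haiku about llamas.\nAssistant:"  # illustrative short-prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
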

To reproduce the finetuning results, do the following:

  1. Install jupyter and run it
pip install jupyter
jupyter notebook
  2. Navigate to the inference.ipynb notebook and test out the inference demo.

  3. Navigate to the finetune.ipynb notebook and reproduce the finetuning results.

  • It takes about 5 hours with the default settings
  • Adjust the batch size and gradient accumulation steps to fit your GPU (see the sketch below)
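
For orientation, these knobs typically live in a peft LoraConfig plus transformers TrainingArguments pair inside the notebook. The sketch below is only an illustration; the hyperparameter values, output directory, and target module names are assumptions, not values copied from finetune.ipynb:

# illustrative LoRA finetuning config; values are assumptions, not the repo's defaults
from transformers import TrainingArguments
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # assumption: GPT-2-style attention projection in Cerebras-GPT
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="lora-cerebras-gpt2.7b-alpaca",  # hypothetical output directory
    per_device_train_batch_size=4,    # lower this if you hit CUDA out-of-memory errors
    gradient_accumulation_steps=32,   # raise this to keep the effective batch size constant
    num_train_epochs=3,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=10,
)
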

📝 License

Apache 2.0
