This project fine-tunes BLOOM models for instruction following, using the Stanford Alpaca dataset and training approach.
Install the dependencies:

```bash
pip install -r requirements.txt
```
Training data: the Stanford Alpaca instruction-following dataset.
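The data follows Stanford Alpaca's format: a JSON list of records, each with `instruction`, `input`, and `output` fields. Below is a minimal sketch of loading it, assuming the file name from the Stanford Alpaca release; this repo's actual path may differ.

```python
# Sketch of the Alpaca-style data format. The file name follows the
# Stanford Alpaca release and is an assumption about this repo's layout.
import json

with open("alpaca_data.json") as f:
    records = json.load(f)

# Each record pairs an instruction (and an optional input) with a target output.
print(records[0])
# e.g. {"instruction": "Give three tips for staying healthy.",
#       "input": "",
#       "output": "1. Eat a balanced diet. ..."}
```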
Fine-tune with:

```bash
python finetune-alpaca.py \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 1 \
    --num_train_epochs 2 \
    --learning_rate 2e-5 \
    --fp16 True \
    --logging_steps 10 \
    --output_dir output
```
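For reference, here is a minimal sketch of what a script like `finetune-alpaca.py` typically does with these arguments, using the Hugging Face `Trainer`. The model checkpoint, prompt text, and dataset handling below are illustrative assumptions; see the repo's script for the actual implementation.

```python
# Minimal BLOOM fine-tuning sketch with the Hugging Face Trainer.
# Model name, prompt text, and tokenization details are assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "bigscience/bloom-560m"  # assumed checkpoint; any BLOOM size works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy training text standing in for the formatted Alpaca prompts.
texts = [
    "### Instruction:\nGive three tips for staying healthy.\n### Response:\n"
    "Eat well, exercise regularly, and sleep enough.",
]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

train_dataset = Dataset.from_dict({"text": texts}).map(tokenize, remove_columns=["text"])

# Mirror the command-line flags above.
args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    num_train_epochs=2,
    learning_rate=2e-5,
    fp16=True,          # requires a CUDA GPU
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    # Causal-LM collator pads each batch and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("output")
```

The effective batch size is per_device_train_batch_size × gradient_accumulation_steps × number of GPUs, i.e. 2 per GPU with the defaults above.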
TODO:
- Add a Google Colab example
- Test bloom-560m
- Test bloom-1b7
- Test bloom-7b1
- Support evaluation