You are calling `save_pretrained` to a 4-bit converted model, but your bitsandbytes version doesn't support it. #3951
Labels: llm (Large Language Model related)
Describe the bug
I have enabled 4-bit quantization for fine-tuning mistralai/Mistral-7B-v0.1. Ludwig 0.10.1 appears to pin bitsandbytes < 0.41.0, and when I run the trainer I get the following warning:

> You are calling save_pretrained to a 4-bit converted model, but your bitsandbytes version doesn't support it.
To Reproduce
Steps to reproduce the behavior:

1. Create a `model.yaml` with 4-bit quantization enabled.
2. Run: `ludwig train --config model.yaml --dataset "ludwig://alpaca"`
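Since the original `model.yaml` contents are not shown, here is a minimal sketch of what such a config could look like, assuming Ludwig's LLM fine-tuning schema (`model_type: llm`, `quantization.bits`); the feature names and trainer settings are illustrative only:

```yaml
# Hypothetical reconstruction of the reporter's config; not from the issue.
model_type: llm
base_model: mistralai/Mistral-7B-v0.1

# 4-bit quantization, which triggers the save_pretrained warning on save
quantization:
  bits: 4

adapter:
  type: lora

input_features:
  - name: instruction
    type: text

output_features:
  - name: output
    type: text

trainer:
  type: finetune
  epochs: 1
```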
Expected behavior
Should not show the warning about the bitsandbytes version not supporting `save_pretrained` for 4-bit quantization.

Environment (please complete the following information):
@alexsherstinsky