System Info
`transformers` version: 4.39.0

Who can help?
@SunMarc @ArthurZucker
Information
Tasks
examples folder (such as GLUE/SQuAD, ...)

Reproduction
I am converting the original LLaVA-1.6 Mistral 7B model into a BNB 4-bit saved model by inserting

model.save_pretrained()

at https://github.com/haotian-liu/LLaVA/blob/7440ec9ee37b0374c6b5548818e89878e38f3353/llava/model/builder.py#L114

The saved model is a mere 4.6 GB: https://huggingface.co/panoyo9829/llava-v1.6-mistral-7b-bnb-4bit
However, it takes a good 77.8 seconds to load on an AMD 5800X (8 cores / 16 threads), and 9.2 seconds on an 80-core Intel Xeon CPU. This seems too slow. I am on an NVMe SSD, yet I see close to 0% disk IO.
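Timings like these can be captured with a simple wall-clock timer around the load call. A self-contained sketch of that pattern, using a stand-in for the real `from_pretrained` call so it runs on its own:

```python
import time

def timed_load(load_fn):
    """Run a load function and report wall-clock seconds."""
    start = time.perf_counter()
    result = load_fn()
    elapsed = time.perf_counter() - start
    print(f"loaded in {elapsed:.1f}s")
    return result, elapsed

# Stand-in for e.g.:
#   AutoModelForCausalLM.from_pretrained("panoyo9829/llava-v1.6-mistral-7b-bnb-4bit")
_, elapsed = timed_load(lambda: time.sleep(0.05))
```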
Expected behavior
When loading fp16 models, loading takes less than 30 seconds and disk IO is heavily utilized; when loading a BNB 4-bit model, I see almost no disk IO.

Something in the BNB model loading path must be affecting it significantly.

Possible regression from #26037.

I am fully aware that HF now supports llava-next natively, and I have yet to try saving BNB 4-bit weights from the HF-converted weights, but my point is that this is not related to the model itself, since fp16 loads fine.
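Since disk IO sits near zero while the load is slow, profiling the load should show where the CPU time actually goes. A pure-stdlib sketch with `cProfile`; the `load_model` stand-in here is a placeholder for the real `from_pretrained` call:

```python
import cProfile
import io
import pstats

def load_model():
    # Placeholder for the real call, e.g.:
    #   AutoModelForCausalLM.from_pretrained("panoyo9829/llava-v1.6-mistral-7b-bnb-4bit")
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
load_model()
profiler.disable()

# Print the five most expensive call sites by cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```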