Issues Converting a Hugging Face Model to GGUF Format
I ran into problems while trying to convert a Hugging Face model to the GGUF format on an Ubuntu system. Here is my environment information:
PyTorch: 2.2.1
CUDA: 12.1.1
Python: 3.11
I used the following commands to clone and prepare the llama.cpp project:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip3 install -r requirements.txt
I successfully merged the safetensors files and saved the result as a PyTorch model. However, I ran into an issue when using the convert.py script to convert the model to the GGUF format with f16 precision. The command I used was:
The error that occurred is:
KeyError: 'transformer.h.0.attn.c_attn.bias'
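To narrow this down, the tensor names actually stored in the original safetensors shards can be inspected without loading the model. This is a stdlib-only sketch based on the safetensors file layout (8-byte little-endian header length followed by a JSON header keyed by tensor name); the filename in the usage comment is a placeholder, not my actual path:

```python
import json
import struct

def safetensors_tensor_names(path):
    """Return the tensor names stored in a .safetensors file.

    A safetensors file begins with an 8-byte little-endian unsigned
    integer giving the length of a JSON header; the header's keys are
    the tensor names, plus an optional "__metadata__" entry.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len).decode("utf-8"))
    return sorted(k for k in header if k != "__metadata__")

# Usage (placeholder filename): check whether the key that convert.py
# failed on is actually present in the shard.
# names = safetensors_tensor_names("model-00001-of-00002.safetensors")
# print("transformer.h.0.attn.c_attn.bias" in names)
```

If the key is missing from every shard, the model's architecture likely does not match what convert.py expects for this checkpoint layout.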
Could anyone assist me in resolving this issue?