This repository has been archived by the owner on Jun 24, 2024. It is now read-only.
At present, llama.cpp contains a Python script that converts `pth` files to the `ggml` format.
It would be nice to build this conversion into the CLI directly, so that the original model files can be loaded without a separate step. The Python script could also be ported to Rust, giving us a fully-Rust path from `pth` to `ggml` models.
We should be able to use this to convert tensors to the GGML format. In the future, we could load tensors directly (I may split that into a new issue), but for now the focus is on loading tensors so they can be quantised by #84 and used by llama-cli.
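As a rough illustration of what a Rust port might involve, here is a minimal sketch of serializing one f32 tensor in the legacy ggml container layout (magic word, then a per-tensor header of dimension count, name length, and element type, followed by dimensions, name bytes, and raw little-endian data). The function names here are hypothetical and not from llama-rs, and the exact field order/vocab section of the real format should be checked against llama.cpp's conversion script.

```rust
// Hypothetical sketch, not the llama-rs implementation.
// Assumes the legacy ggml container layout: a magic u32, then for each
// tensor: n_dims, name length, element type, dims, name, raw f32 data,
// all little-endian.

const GGML_MAGIC: u32 = 0x6767_6d6c; // "ggml" when read back as a LE u32

/// Append one f32 tensor to `out` in the assumed legacy layout.
fn write_tensor(out: &mut Vec<u8>, name: &str, dims: &[u32], data: &[f32]) {
    out.extend((dims.len() as u32).to_le_bytes()); // n_dims
    out.extend((name.len() as u32).to_le_bytes()); // length of the name
    out.extend(0u32.to_le_bytes()); // element type: 0 = f32
    for &d in dims {
        out.extend(d.to_le_bytes());
    }
    out.extend(name.as_bytes());
    // Raw little-endian f32 payload follows the header.
    for &x in data {
        out.extend(x.to_le_bytes());
    }
}

fn main() {
    let mut out = Vec::new();
    out.extend(GGML_MAGIC.to_le_bytes());
    write_tensor(
        &mut out,
        "tok_embeddings.weight",
        &[2, 2],
        &[1.0, 2.0, 3.0, 4.0],
    );
    println!("serialized {} bytes", out.len());
}
```

A real converter would additionally read the PyTorch checkpoint (a zip of pickled metadata plus raw tensor storages), write the model hyperparameters and vocabulary before the tensor section, and support quantised element types, but the byte-level writing pattern is the same.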