Replacing call to convert-pth-to-ggml.py with convert.py #1641
Conversation
Deprecation disclaimer was added to convert-pth-to-ggml.py. Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
convert.py gives me this error consistently on many different computers and operating systems:
Hi. Could you give me the full trace of this, along with the arguments used in the CLI? I haven't actually changed convert.py itself.
That probably means the file you're trying to convert is faulty.
But the pth file still has the same checksum as in my checklist.chk.
I checked my files and the checksums look fine. It happens with every model size. Here is my CLI call and output: python3 -m trace --count -C . convert.py models/7B. Interestingly, it loads the model file models/7B/consolidated.01.pth, which is 0 KB. That is the reason for the error... when the 0 KB pth files are moved into another directory, it works!
7B has only one pth file:
I have no idea where the 0 KB pth parts came from, but they did cost me a lot of time...
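As an aside, a zero-byte shard like this is easy to screen for before running convert.py. Below is a minimal, illustrative sketch (not part of llama.cpp) that assumes the usual models/7B/consolidated.NN.pth layout:

```python
# Illustrative helper: flag truncated or zero-byte .pth shards before conversion.
# The directory layout here is an assumption, not something this PR defines.
from pathlib import Path

model_dir = Path("models/7B")
for shard in sorted(model_dir.glob("consolidated.*.pth")):
    size = shard.stat().st_size
    status = "OK" if size > 0 else "EMPTY -- move aside or re-download"
    print(f"{shard}: {size} bytes ({status})")
```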
In that case I suppose it really is not related to this patch and we can get it merged. Unless there is something else going wrong?
Tested, it works. However, I can't seem to pass an argument with spaces to the entrypoint.
Yes, I believe that is due to the way the entrypoint argument processing works. Shell isn't exactly known for dealing well with multiple levels of quoting. I do have a rewrite in Python in the works, though.
I don't really see the need for an entrypoint script at all, actually. You could put something like
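The suggested snippet is elided in the comment above; presumably it was a direct ENTRYPOINT in the Dockerfile. A hedged reconstruction of the idea (the path and image layout are assumptions, not the repository's actual Dockerfile):

```dockerfile
# Hypothetical: point the container straight at convert.py, no wrapper script.
# "docker run <image> models/7B --outtype f16" would then effectively run
# "python3 ./convert.py models/7B --outtype f16".
ENTRYPOINT ["python3", "./convert.py"]
```

Note that the exec form of ENTRYPOINT passes the docker run arguments through verbatim, with no extra shell in between, which would also sidestep the quoting problem with arguments containing spaces mentioned earlier.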
TBH, this is a UI/UX question. Adopting the container paradigm fundamentally means that there is now an additional layer between the user and the tool. How that layer should behave depends on the use case. For llama.cpp we are talking about a set of CLI tools, each with a special purpose. It could be argued that at some point, although not necessarily today, it may make sense to unify them all behind one CLI wrapper, which is how a container with an entrypoint effectively behaves.
Since Python is already in the image, I guess a Python script is a better way to do it.
I would say that if there should be a unified CLI for the entire toolchain, that is, convert, quantize, main, and checksum, we should probably start building it outside of the container. When it is finished we can always just add the magic line to the container file. But in that case it would probably be better to use proper Python bindings, rather than an ad hoc script calling binaries.
Well, we have a cool server coming in #1570 that doesn't need Python or any specific language.
In that case it would be better to just wait for it, and use that as an entrypoint for containers. No need to duplicate things.
If a Python script as an entrypoint for the container does make sense, I would keep it simple: have it implement the absolute minimum of logic needed to call the other scripts/binaries, and rely on the user to invoke the right args. After all, it is probably going to be used in some sort of semi-automated, or even fully automated, manner. Like this: #1686. A CLI proper is a different thing. In that case the existing
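For illustration, here is a minimal sketch of such a thin dispatcher. The tool names and paths are assumptions for this example, not anything defined by the PR; the only logic is mapping a subcommand to a script or binary and forwarding the remaining arguments untouched, which also avoids the shell-quoting issue discussed above:

```python
#!/usr/bin/env python3
# Hypothetical thin entrypoint: map a subcommand to a tool, forward args verbatim.
# Tool names and paths below are illustrative assumptions.
import subprocess
import sys

TOOLS = {
    "convert": ["python3", "./convert.py"],
    "quantize": ["./quantize"],
    "run": ["./main"],
}

def main() -> int:
    if len(sys.argv) < 2 or sys.argv[1] not in TOOLS:
        print(f"usage: entrypoint.py {{{'|'.join(TOOLS)}}} [args...]", file=sys.stderr)
        return 1
    # sys.argv entries are already separate arguments, so strings containing
    # spaces survive intact -- no additional level of shell quoting is added.
    return subprocess.call(TOOLS[sys.argv[1]] + sys.argv[2:])

if __name__ == "__main__":
    sys.exit(main())
```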
I am also getting the same error. I placed the downloaded model in the directory models/llama-2-7b-chat, then I copied and pasted: python convert-pth-to-ggml.py models/llama-2-7b-chat/ 1. It returns the same error! Looking for your help.
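For reference, convert-pth-to-ggml.py is exactly the script this PR deprecates, so the equivalent call would go through convert.py instead. Assuming the same directory layout as in the comment above, something like:

```
python convert.py models/llama-2-7b-chat/
```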
Deprecation disclaimer was added to convert-pth-to-ggml.py. I'm keeping the script around for now, as it may be needed for backward compat. That being said, removing it in the near future would be prudent. I've verified that convert.py can take other arguments when called in the container, although I haven't tested all possible combinations.
#1628