It's been pointed out that `make` may be better supported by llama.cpp (on some platforms). We're currently using scikit-build to build the shared library on installation with `cmake`, but it also supports `make`.
Additional note: as pointed out in #32, we should support passing environment variables to the build in both settings.
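One way to support environment variables in both build modes is to forward the caller's environment to whichever build tool runs, with explicit overrides layered on top. A minimal sketch (the helper name `run_build` and the `extra_env` parameter are hypothetical, not the package's actual API):

```python
import os
import subprocess

def run_build(cmd, extra_env=None):
    """Run a build command (e.g. cmake or make) with the caller's
    environment forwarded, plus any explicit overrides.

    Sketch only: forwarding os.environ lets variables like CC or
    CFLAGS reach the build, and extra_env wins on conflicts.
    """
    env = os.environ.copy()      # forward the user's environment
    env.update(extra_env or {})  # explicit per-call overrides win
    return subprocess.run(cmd, env=env, check=True)
```

This keeps the behavior identical whether the underlying tool is `cmake` or `make`, since both read their configuration from the process environment.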
With the current master branches of both projects, I can get it to work by going into vendor/llama.cpp, running `make libllama.so`, and manually copying that .so into the main project's llama_cpp directory (as suggested in ggerganov/llama.cpp#797). The automatic build process doesn't work.
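The manual copy works because the package only needs to find a shared library next to its own sources at import time. A sketch of that lookup, assuming a per-platform suffix (this is illustrative, not the package's actual loader code):

```python
import ctypes
import pathlib
import sys

def find_libllama(search_dir):
    """Return the path of the llama shared library in search_dir,
    trying platform-appropriate suffixes, or None if absent."""
    if sys.platform == "darwin":
        suffixes = [".dylib", ".so"]
    elif sys.platform in ("win32", "cygwin"):
        suffixes = [".dll"]
    else:
        suffixes = [".so"]
    for suffix in suffixes:
        candidate = pathlib.Path(search_dir) / f"libllama{suffix}"
        if candidate.exists():
            return candidate
    return None

def load_libllama(search_dir):
    """Load the library via ctypes, failing loudly if it is missing."""
    path = find_libllama(search_dir)
    if path is None:
        raise RuntimeError(f"libllama not found in {search_dir}")
    return ctypes.CDLL(str(path))
```

So as long as the .so copied from vendor/llama.cpp lands in the directory the loader searches, the bindings work regardless of which tool built it.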
@pjlegato I've pushed a fix to build the project using `make` on Linux and macOS. Do you mind testing it for me? If you can confirm that it works, I'll push to PyPI.
EDIT: Got confirmation this worked, going to close this issue.
* Adding repeat penalization
* Update utils.h
* Update utils.cpp
* Numeric fix
Should probably still scale by temp even if penalized
* Update comments, more proper application
I see that logit values can go negative, so I applied a fix from a referenced commit
* Minor formatting
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
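The penalization described in the commit notes above can be sketched as follows. The sign handling is the key detail: for positive logits the penalty divides, for negative logits it multiplies, so a repeated token is always pushed down (plain division would make a negative logit *more* likely). Names here are illustrative, not the actual llama.cpp code:

```python
def apply_repeat_penalty(logits, prev_tokens, penalty=1.3):
    """Penalize tokens that already appeared in prev_tokens.

    Positive logits are divided by the penalty, negative logits are
    multiplied by it, so the score always moves toward less likely.
    """
    out = list(logits)
    for tok in set(prev_tokens):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out
```

Temperature scaling can then still be applied to the penalized logits afterwards, matching the "should probably still scale by temp even if penalized" note above.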