
Investigate using make instead of cmake to build shared library #20

Closed
abetlen opened this issue Apr 4, 2023 · 3 comments

Labels: good first issue (Good for newcomers), help wanted (Extra attention is needed)

Comments

abetlen (Owner) commented Apr 4, 2023

It's been pointed out that make may be better supported by llama.cpp on some platforms. We're currently using scikit-build with cmake to build the shared library on installation, but scikit-build also supports make-based builds.
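For example, scikit-build splits its setup.py command line at `--` and forwards the second segment to CMake, so a Make generator can be selected without changing the build scripts (illustrative invocation, not this project's current one):

```sh
# Everything after the first `--` is passed to CMake; here we ask
# CMake to generate Makefiles instead of the default generator.
python setup.py bdist_wheel -- -G "Unix Makefiles"
```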

Additional note: as pointed out in #32, we should support passing environment variables through to the build in both configurations.
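For illustration, that passthrough could look like this at install time; the variable names here (`CMAKE_ARGS`, `LLAMA_OPENBLAS`) are assumptions for the sketch, not a settled interface:

```sh
# Hypothetical: forward flags from the environment into whichever
# backend (cmake or make) performs the build.
CMAKE_ARGS="-DLLAMA_OPENBLAS=on" pip install .   # cmake-style flags
LLAMA_OPENBLAS=1 pip install .                   # make-style flags
```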

abetlen added the help wanted and good first issue labels Apr 4, 2023
pjlegato commented Apr 7, 2023

With the current master branches of both projects, I can get it to work by going into vendor/llama.cpp, running make libllama.so, and then manually copying the resulting .so into the main project's llama_cpp directory (as suggested in ggerganov/llama.cpp#797). The automatic build process doesn't work.
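As a transcript, the workaround described above, run from the root of a llama-cpp-python checkout:

```sh
# Build the shared library with llama.cpp's own Makefile...
cd vendor/llama.cpp
make libllama.so
# ...then copy it to where the Python bindings expect to find it.
cp libllama.so ../../llama_cpp/
```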

abetlen (Owner) commented Apr 7, 2023

https://cmake.org/pipermail/cmake/2010-November/040631.html suggests adding a custom target to the root CMakeLists of this project and calling make in llama.cpp. I'll try this approach.
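A minimal sketch of what that could look like in this project's root CMakeLists.txt; the target name, paths, and install destination are assumptions based on the repo layout, not the final implementation:

```cmake
# Custom target that delegates the build to llama.cpp's Makefile,
# plus an install rule so scikit-build packages the resulting library.
add_custom_target(
    run ALL
    COMMAND make libllama.so
    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/vendor/llama.cpp
)
install(
    FILES ${CMAKE_SOURCE_DIR}/vendor/llama.cpp/libllama.so
    DESTINATION llama_cpp
)
```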

abetlen (Owner) commented Apr 8, 2023

@pjlegato I've pushed a fix that builds the project using make on Linux and macOS; do you mind testing it for me? If you can confirm that it works, I'll push to PyPI.
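One way to test a fix like that from a checkout of master (assuming submodules are initialized):

```sh
# Reinstall from source, then confirm the bindings import cleanly.
pip install --upgrade --force-reinstall .
python -c "import llama_cpp; print(llama_cpp.__file__)"
```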

EDIT: Got confirmation this worked, going to close this issue.

abetlen closed this as completed Apr 8, 2023