
ADD libllama.so target for llama-cpp-python #797

Merged

Conversation

@bhubbb (Contributor) commented Apr 5, 2023

I could get llama-cpp-python working, but only after building `libllama.so` with `make` and replacing the copy of `libllama.so` that shipped with llama-cpp-python.

I wanted to add this here to help with the llama-cpp-python project.

@SagsMug commented Apr 6, 2023

Did you try `mkdir lib && cd lib && cmake -DBUILD_SHARED_LIBS=1 ..`?
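For reference, the full out-of-tree build that command implies might look like this (a sketch assuming a llama.cpp checkout; the `lib/` directory name is arbitrary and just keeps build artifacts out of the source tree):

```shell
# From the root of a llama.cpp checkout.
mkdir -p lib && cd lib

# BUILD_SHARED_LIBS asks CMake to produce a shared libllama.so
# instead of a static archive.
cmake -DBUILD_SHARED_LIBS=1 ..

# Build; the resulting libllama.so lands under this build directory.
cmake --build .
```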

@bhubbb (Contributor) commented Apr 6, 2023

Not manually, but I believe that is what llama-cpp-python does, and it would just throw segmentation faults for me until `libllama.so` was built with `make`.

I have not had a lot of luck with cmake builds on my system.

@ggerganov merged commit 698f7b5 into ggerganov:master on Apr 7, 2023
@pjlegato commented Apr 7, 2023

Context: building with `make` sets `-march=native` in the Makefile, which automatically selects the best instruction-set extensions available on the current CPU.

The `CMakeLists.txt` supplied with llama.cpp manually enables specific instruction-set extensions, so the resulting binary fails with "illegal instruction" errors unless the current CPU happens to support those pre-chosen instructions.
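One quick way to see what `-march=native` actually enables (a diagnostic sketch, assuming GCC or Clang on an x86 host; output will vary by CPU) is to dump the compiler's predefined macros and filter for SIMD feature flags:

```shell
# Ask the compiler which instruction-set macros -march=native turns on
# for this machine. Lines like __AVX2__ or __FMA__ mark extensions the
# compiled binary may use; running such a binary on a CPU lacking them
# raises SIGILL ("illegal instruction").
cc -march=native -dM -E - </dev/null | grep -E '__(AVX|FMA|SSE)' | sort
```

Comparing this list against the extensions hard-coded in the CMake build shows exactly which instructions a `make`-built binary gets that a CMake-built one assumes.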
