Publish all wheels to PyPI #741

Open
simonw opened this issue Sep 20, 2023 · 3 comments
Labels
enhancement New feature or request

Comments

simonw commented Sep 20, 2023

It looks like PyPI only has the source distribution for each release: https://pypi.org/project/llama-cpp-python/0.2.6/#files

[Screenshot: PyPI files listing for llama-cpp-python 0.2.6, showing only a source distribution]

But the GitHub release at https://github.com/abetlen/llama-cpp-python/releases/tag/v0.2.6 lists many more files than that:

[Screenshot: GitHub release assets for v0.2.6, listing pre-built wheels for many platforms]

Would it be possible to push those wheels to PyPI as well?

I'd love to be able to pip install llama-cpp-python and get a compiled wheel for my platform.

abetlen commented Oct 1, 2023

Hey @simonw! Big fan of your datasette project.

I hear you and I would like to make the setup process a little easier and less error-prone.

Currently llama.cpp supports a number of optional accelerations, including several BLAS libraries, multiple CUDA versions, OpenCL, and Metal. In theory I could publish a pre-built wheel that includes a version of llama.cpp with no accelerations enabled, but that runs counter to the goal of giving users the fastest local inference for their hardware.

I'm open to suggestions though, and I'll try to think of some possible solutions.

simonw commented Oct 5, 2023

Two approaches I can think of that might work:

  • publish separate wheels for each platform, with separate names
  • publish one large wheel that bundles the different versions together and has code that can pick the "right" one

For that first option, one way that could work is to have a llama-cpp-python package which everyone installs but which doesn't actually work until you install one of the "backend" packages: llama-cpp-python-cuda-12 or llama-cpp-python-metal or similar.

How large are the different binaries? If all of them could be bundled in a single wheel under 50 MB, that could be a neat solution, provided you can write code that detects which one to use.

You could even distribute that as llama-cpp-python-bundle and tell people to install that one if they aren't sure which version would work best for them.
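
A rough sketch of what that detection code could look like, assuming hypothetical backend submodule names bundled inside one wheel (nothing like this exists today, it's just to illustrate the idea):

# Hypothetical dispatch code for a bundled wheel: probe the machine and
# pick the most capable pre-built llama.cpp backend that would apply.
# The submodule names below are made up for illustration.
import ctypes.util
import platform


def pick_backend() -> str:
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "llama_cpp._metal"   # Apple Silicon -> Metal build
    if ctypes.util.find_library("cuda") or ctypes.util.find_library("nvcuda"):
        return "llama_cpp._cuda"    # NVIDIA driver found -> CUDA build
    return "llama_cpp._cpu"         # fallback: unaccelerated CPU build


print("would load:", pick_backend())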

It's a tricky problem though! I bet there are good options I've missed here.

@abetlen abetlen added the enhancement New feature or request label Dec 22, 2023
@abetlen abetlen mentioned this issue Mar 3, 2024

abetlen commented Apr 4, 2024

Hey @simonw, it took a while, but this is finally possible through a self-hosted PEP 503 repository on GitHub Pages (see #1247).

You should now be able to specify

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

on the CLI or

 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
llama-cpp-python

in a requirements.txt to install a pre-built binary version of llama-cpp-python.
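
Once that's installed, a quick way to confirm the wheel works is just to import it (the model path below is a placeholder):

# Quick smoke test that the pre-built wheel installed correctly.
import llama_cpp

print(llama_cpp.__version__)
# Loading a model then looks like (path is a placeholder):
# llm = llama_cpp.Llama(model_path="./models/your-model.gguf")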

The PR also includes initial support for Metal and CUDA wheels though I had to limit the number of supported Python and CUDA versions to avoid a combinatorial explosion in the number of builds.
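
For the Metal and CUDA wheels the index path changes accordingly; the exact paths shown here are assumptions, so check the README for the published ones:

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121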
