
Update setup.py to support multiple device capabilities #56

Merged
merged 1 commit on Dec 11, 2023

Conversation

simon-mo
Contributor

Currently the setup.py code only targets the current device, making it difficult to build wheels that target many architectures. We are facing this problem in distributing vLLM Docker images.

This PR adds a block that recognizes the environment variable TORCH_CUDA_ARCH_LIST, which torch's CUDA extension machinery interprets in order to build for multiple architectures.
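For illustration, a minimal sketch of what such a block might look like. The helper name and exact flag handling here are assumptions, not the PR's actual diff; in practice torch.utils.cpp_extension performs this translation internally when TORCH_CUDA_ARCH_LIST is set:

```python
import os


def cuda_arch_flags():
    """Translate TORCH_CUDA_ARCH_LIST (e.g. "7.0 7.5 9.0+PTX") into nvcc
    -gencode flags, roughly as torch.utils.cpp_extension does internally.

    Returns None when the variable is unset, in which case torch falls
    back to detecting the capability of the current device only.
    """
    arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST")
    if not arch_list:
        return None
    flags = []
    for arch in arch_list.replace(";", " ").split():
        ptx = arch.endswith("+PTX")
        num = arch.removesuffix("+PTX").replace(".", "")
        # SASS (cubin) for the named architecture.
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
        if ptx:
            # Also embed PTX so newer GPUs can JIT-compile the kernels.
            flags.append(f"-gencode=arch=compute_{num},code=compute_{num}")
    return flags
```

With the variable unset, the helper returns None and the build behaves as before, targeting only the device present at build time.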

@tgale96
Contributor

tgale96 commented Dec 11, 2023

LGTM! Thanks for the contribution!

@mvpatel2000 would you mind verifying this as well?

@tgale96
Contributor

tgale96 commented Dec 11, 2023

I will merge to unblock you and we can revise later if necessary :)

@tgale96 tgale96 merged commit 5897cd6 into databricks:main Dec 11, 2023
@simon-mo
Contributor Author

You can verify it by supplying TORCH_CUDA_ARCH_LIST='7.0 7.5 8.0 8.6 8.9 9.0+PTX' (and setting NVCC_THREADS=2 and MAX_JOBS=4 to accelerate the build).
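A full build invocation might then look like the following. This is a sketch; the comment only names the environment variables, so the pip command and working directory are assumptions:

```shell
# Build kernels for several architectures at once. NVCC_THREADS and
# MAX_JOBS bound nvcc and compile-job parallelism so the build does
# not exhaust CPU or memory. Run from the repository root.
TORCH_CUDA_ARCH_LIST='7.0 7.5 8.0 8.6 8.9 9.0+PTX' \
NVCC_THREADS=2 \
MAX_JOBS=4 \
pip install --no-build-isolation .
```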

The shared object should contain kernels for the different architectures.

cuobjdump megablocks_ops.cpython-310-x86_64-linux-gnu.so

Fatbin elf code:
================
arch = sm_70
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_75
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_80
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_86
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_89
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_90
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin ptx code:
================
arch = sm_90
code version = [8,3]
host = linux
compile_size = 64bit
compressed
ptxasOptions = -v
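The dump above can also be checked programmatically. A small hypothetical helper (not part of the PR) that extracts the embedded architectures from cuobjdump's text output:

```python
import re


def embedded_archs(cuobjdump_output: str) -> list[str]:
    """Collect the 'arch = sm_XX' entries from `cuobjdump` text output,
    preserving order and dropping duplicates."""
    seen: list[str] = []
    for match in re.finditer(r"arch = (sm_\d+)", cuobjdump_output):
        arch = match.group(1)
        if arch not in seen:
            seen.append(arch)
    return seen
```

Applied to the dump above, it would return sm_70, sm_75, sm_80, sm_86, sm_89, and sm_90 (the sm_90 PTX section deduplicates against the sm_90 cubin).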

@tgale96
Contributor

tgale96 commented Dec 11, 2023

Excellent, thank you! Will you install from the git repo, or would you like me to cut an updated PyPI package?

@simon-mo
Contributor Author

Thanks! I'm just going to install from the git repo for now. No need to cut a release.

@tgale96
Contributor

tgale96 commented Dec 11, 2023

Perfect! I can cut a version as part of fixing vllm-project/vllm#2032 that includes this change as well.
