
FEAT: Refactor device related code and add initial Intel GPU support #968

Merged · 5 commits merged into xorbitsai:main from intel-gpu-support on Feb 21, 2024

Conversation

notsyncing
Contributor

@notsyncing notsyncing commented Feb 2, 2024

Refactors most device-related code into device_utils (PyTorch backend only; vllm and ctransformers are not covered, nor are 8-bit and 4-bit quantization) and adds initial Intel GPU support.

You will need intel-extension-for-pytorch to run it: https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html
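For illustration, here is a minimal sketch of how XPU detection can sit alongside the usual CUDA/MPS checks once intel-extension-for-pytorch is installed. The helper name below is just for illustration and is not the exact device_utils API:

```python
import torch

try:
    # Importing IPEX registers the "xpu" device with torch on PyTorch versions
    # that do not ship XPU support natively.
    import intel_extension_for_pytorch  # noqa: F401
except ImportError:
    pass


def get_available_device() -> str:
    """Pick the best available accelerator, falling back to CPU (illustrative helper)."""
    if torch.cuda.is_available():
        return "cuda"
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return "mps"
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"


print(get_available_device())
```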

Tested on Llama2-chat and ChatGlm3.

(device_map support requires huggingface/accelerate#2383)

@XprobeBot XprobeBot added this to the v0.8.2 milestone Feb 2, 2024
@notsyncing notsyncing force-pushed the intel-gpu-support branch 2 times, most recently from 19332e9 to f7b70f9 on February 2, 2024 07:20
@XprobeBot XprobeBot modified the milestones: v0.8.2, v0.8.4 Feb 2, 2024
@aresnow1
Contributor

aresnow1 commented Feb 2, 2024

Wow, impressive work!

@aresnow1
Contributor

aresnow1 commented Feb 2, 2024

Lint failed. We use pre-commit (https://pre-commit.com/) to lint before committing; you can install it locally and commit again.
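(For reference: typically `pip install pre-commit` followed by `pre-commit install` in the repository checkout sets up the hooks, and `pre-commit run --all-files` runs the same checks locally before committing.)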

@notsyncing
Contributor Author

@aresnow1 All lint errors are fixed, and I have tested again on ChatGlm3.

@qinxuye
Contributor

qinxuye commented Feb 5, 2024

I fixed the lint issues for you. Besides flake8, we use black to format code and isort to sort the imports.

@aresnow1
Contributor

aresnow1 left a comment


LGTM overall; some details need to be confirmed.

xinference/device_utils.py (review thread, outdated, resolved)
xinference/model/llm/pytorch/core.py (review thread, resolved)
@notsyncing notsyncing force-pushed the intel-gpu-support branch 3 times, most recently from 3a9ceb9 to 88f2214 on February 5, 2024 12:08
@XprobeBot XprobeBot modified the milestones: v0.8.5, v0.9.0 Feb 6, 2024
@notsyncing notsyncing force-pushed the intel-gpu-support branch 2 times, most recently from 88f0935 to 7dafcf1 on February 12, 2024 14:11
@aresnow1
Contributor

Hi, could you rebase onto the main branch and push again?

@notsyncing
Contributor Author

> Hi, could you rebase onto the main branch and push again?

Rebased and pushed.

@aresnow1 aresnow1 merged commit 9d81afc into xorbitsai:main Feb 21, 2024
9 of 12 checks passed
@aresnow1
Contributor

Looks good to me, thanks for your contribution.
