[ExecuTorch][xnnpack] Pass QP8 flag only for Kleidi QB4W #6220
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6220.
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@digantdesai force-pushed from c190876 to b384ff7.
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
For testing, enable Kleidi with the following change and run the llama models on Android with 8da4w and 8da8w linears:

```diff
  -DEXECUTORCH_BUILD_XNNPACK=ON \
+ -DEXECUTORCH_XNNPACK_ENABLE_KLEIDI=ON \
```
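For context, a fuller Android configure invocation might look like the sketch below. The build directory, NDK toolchain path, and ABI are illustrative assumptions, not commands taken from this PR; `arm64-v8a` is used because KleidiAI targets Arm CPUs:

```bash
# Hypothetical configure sketch; cmake-out-android, $ANDROID_NDK, and the
# ABI are assumptions, not taken from this PR.
cmake -B cmake-out-android \
  -DCMAKE_TOOLCHAIN_FILE="$ANDROID_NDK/build/cmake/android.toolchain.cmake" \
  -DANDROID_ABI=arm64-v8a \
  -DEXECUTORCH_BUILD_XNNPACK=ON \
  -DEXECUTORCH_XNNPACK_ENABLE_KLEIDI=ON \
  .
```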
@digantdesai merged this pull request in 065d480.