Use sycl::bfloat16 class and functions instead of float casts. #1341

The bfloat16 class has been non-experimental for a while now, supporting all backends: #1286
However, SYCLomatic appears not to be using it, and instead always casts to float; see e.g. #1286.
This seems like a lost opportunity. For example, DPC++ has native CUDA implementations of the bfloat16 math functions that make bfloat16 math much faster than casting to float.
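A minimal sketch of the two patterns (the kernel and variable names are illustrative only, not code from SYCLomatic; `sycl::ext::oneapi::bfloat16` and its arithmetic operators are what the extension provides):

```cpp
// Minimal sketch contrasting the float-cast pattern with direct use of the
// bfloat16 class. All variable names here are hypothetical.
#include <sycl/sycl.hpp>

using bf16 = sycl::ext::oneapi::bfloat16;

int main() {
  sycl::queue q;
  constexpr size_t n = 1024;
  bf16 *a = sycl::malloc_shared<bf16>(n, q);
  bf16 *b = sycl::malloc_shared<bf16>(n, q);
  bf16 *c = sycl::malloc_shared<bf16>(n, q);
  for (size_t i = 0; i < n; ++i) {
    a[i] = bf16(1.5f);
    b[i] = bf16(2.0f);
  }

  q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
    // Float-cast pattern (what the migrated code currently does): every
    // operation round-trips through float.
    // c[i] = bf16(static_cast<float>(a[i]) * static_cast<float>(b[i]));

    // Direct bfloat16 arithmetic: operator* on the bfloat16 class can lower
    // to native bfloat16 instructions on backends that have them.
    c[i] = a[i] * b[i];
  }).wait();

  sycl::free(a, q);
  sycl::free(b, q);
  sycl::free(c, q);
  return 0;
}
```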
Comments

@JackAKirk, the PR #1286 referenced for the sentence "The bfloat16 class has been non-experimental for a while now, supporting all backends" is incorrect. Could you provide the correct PR, so that we can confirm that the bfloat16 class is non-experimental?
Sorry, I meant this one: intel/llvm#6524. Note, actually, that I had forgotten the bfloat16 math functions are still in the experimental namespace: intel/llvm#7567.
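For reference, a minimal sketch of what calling one of those math functions looks like today, assuming the sycl_ext_oneapi_bfloat16_math_functions extension (the header path is an assumption and may differ between DPC++ versions):

```cpp
// Minimal sketch: bfloat16 fma via the experimental namespace. The header
// path below is an assumption and may vary between DPC++ versions.
#include <sycl/sycl.hpp>
#include <sycl/ext/oneapi/experimental/bfloat16_math.hpp>

namespace syclex = sycl::ext::oneapi::experimental;
using bf16 = sycl::ext::oneapi::bfloat16;

int main() {
  sycl::queue q;
  bf16 *out = sycl::malloc_shared<bf16>(1, q);

  q.single_task([=] {
    bf16 a{1.5f}, b{2.0f}, c{0.5f};
    // Still spelled with the experimental:: prefix, which is the point of
    // intel/llvm#7567 mentioned above.
    out[0] = syclex::fma(a, b, c);
  }).wait();

  sycl::free(out, q);
  return 0;
}
```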
I think it would be OK to move the math functions out of experimental. @gmlueck, do you have an opinion?
I think this could be OK. I'd like to consider merging the math functions into the base extension for bfloat16, though, rather than having two separate extensions.
Sounds good to me. I'd be happy to draft a PR merging the two extensions.
@gmlueck @JackAKirk PR is here: intel/llvm#11506
Hi, @JackAKirk. The PR (intel/llvm#11506) has been in draft status for about a year, so we need to wait.