[FRONTEND] Fix triton.language.dtype repr #3342
Conversation
@@ -360,13 +360,21 @@ def to_ir(self, builder: ir.builder) -> ir.type:
    def __str__(self):
        return self.name

    def codegen_name(self):
Could you add a comment explaining why repr should be different from str?
Added to the repr function.
def test_dtype_codegen():
    assert repr(triton.language.float8e4b15x4) == "triton.language.float8e4b15x4"
    assert repr(triton.language.float16) == "triton.language.float16"
    assert repr(triton.language.bfloat16) == "triton.language.bfloat16"
Is it worth checking all of the dtypes? (i.e. iterating over dtype.SINT_TYPES, UINT_TYPES, etc?)
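A hedged sketch of that exhaustive check, assuming name lists like `SINT_TYPES`, `UINT_TYPES`, and `FP_TYPES` as mentioned above. The real test would import `triton.language` directly; here a stand-in namespace replaces it so the sketch runs on its own, and the exact list contents are illustrative, not the actual Triton source.

```python
from types import SimpleNamespace

# Illustrative stand-ins for the dtype name lists mentioned above.
SINT_TYPES = ["int8", "int16", "int32", "int64"]
UINT_TYPES = ["uint8", "uint16", "uint32", "uint64"]
FP_TYPES = ["float16", "bfloat16", "float32", "float64"]


class dtype:
    """Minimal stand-in for triton.language.dtype."""

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        # Fully qualified, evaluable expression (the behavior this PR adds).
        return f"triton.language.{self.name}"


# Stand-in for the real triton.language module namespace.
language = SimpleNamespace(**{n: dtype(n) for n in SINT_TYPES + UINT_TYPES + FP_TYPES})


def test_dtype_repr_exhaustive():
    # Iterate over every dtype name instead of spot-checking a few.
    for name in SINT_TYPES + UINT_TYPES + FP_TYPES:
        assert repr(getattr(language, name)) == f"triton.language.{name}"


test_dtype_repr_exhaustive()
```

The same loop structure would apply to the real module: `getattr(triton.language, name)` for each name in the exported type lists.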
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thank you!
While working on adding `triton.language.dtype` support to PyTorch, I discovered that `triton.language.float32` emits `triton.language.fp32`, which I assume is done for consistency with numpy and torch; however, this produces an un-evaluable expression, since `triton.language.fp32` does not correspond to anything. I considered adding aliases such as `fp32` for `float32`, but that meant duplicating a bunch of dtypes. This solution seems cleaner.
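The fix described above can be sketched as follows. This is a simplified stand-in, not the actual Triton source: `__str__` keeps the short numpy/torch-style spelling (`fp32`) used internally, while `__repr__` returns a fully qualified expression that evaluates back to the dtype. The `_repr_names` mapping and its contents are illustrative assumptions.

```python
class dtype:
    # Illustrative mapping from short internal names to the public
    # triton.language attribute names; the real table is larger.
    _repr_names = {"fp32": "float32", "fp16": "float16", "bf16": "bfloat16"}

    def __init__(self, name):
        self.name = name

    def __str__(self):
        # Short name, consistent with numpy/torch-style spellings.
        return self.name

    def codegen_name(self):
        # Public attribute name, usable in generated code.
        return self._repr_names.get(self.name, self.name)

    def __repr__(self):
        # Evaluable expression: eval(repr(dt)) recovers the dtype object
        # when triton.language is importable.
        return f"triton.language.{self.codegen_name()}"


float32 = dtype("fp32")
print(str(float32))   # fp32
print(repr(float32))  # triton.language.float32
```

With this split, code that prints a dtype for humans still sees `fp32`, while anything that round-trips through `repr` (like the PyTorch integration mentioned above) gets a name that actually exists on `triton.language`.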