[FRONTEND] Fix triton.language.dtype repr #3342

Merged: 1 commit into triton-lang:main on Mar 12, 2024

Conversation

@oulgen (Contributor) commented on Mar 12, 2024:

While working on adding triton.language.dtype support to PyTorch, I discovered that repr(triton.language.float32) emits "triton.language.fp32", which I assume was done to stay consistent with numpy and torch; however, this results in an un-evaluable expression, as triton.language.fp32 does not correspond to anything.
I considered adding fp32 as an alias for float32, but that would have meant duplicating a bunch of dtypes. This solution seems cleaner.
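To illustrate (the pre-fix output is inferred from the description above; the post-fix output matches the tests added in this PR):

    import triton.language as tl

    repr(tl.float32)
    # before this PR (inferred): "triton.language.fp32"    -- tl.fp32 does not exist
    # after this PR:             "triton.language.float32"  -- evaluable with eval()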

@@ -360,13 +360,21 @@ def to_ir(self, builder: ir.builder) -> ir.type:
     def __str__(self):
         return self.name

     def codegen_name(self):
Collaborator:

Could you add a comment explaining why repr should be different from str?

Contributor (author):

Added to the repr function.
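For context, a minimal sketch of what the changed methods plausibly look like after this PR; only the __str__ body and the codegen_name signature appear in the diff above, so the codegen_name body, the __repr__ body, and the comment wording are assumptions:

    def __str__(self):
        return self.name

    def codegen_name(self):
        # Assumed mapping: internal short names ("fp32", "bf16") back to the
        # public triton.language attribute names ("float32", "bfloat16").
        if self.name.startswith("fp"):
            return "float" + self.name[2:]
        elif self.name.startswith("bf"):
            return "bfloat" + self.name[2:]
        return self.name

    def __repr__(self):
        # Unlike __str__, repr must produce an evaluable expression, hence
        # the public name rather than the internal short name.
        return f"triton.language.{self.codegen_name()}"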

def test_dtype_codegen():
    assert repr(triton.language.float8e4b15x4) == "triton.language.float8e4b15x4"
    assert repr(triton.language.float16) == "triton.language.float16"
    assert repr(triton.language.bfloat16) == "triton.language.bfloat16"
Collaborator:

Is it worth checking all of the dtypes? (i.e. iterating over dtype.SINT_TYPES, UINT_TYPES, etc?)
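A sketch of that suggestion, assuming the name lists are class attributes on triton.language.dtype (SINT_TYPES, UINT_TYPES, FP_TYPES, OTHER_TYPES, as the comment implies), that a dtype can be constructed from a name string, and that dtype instances compare equal by name:

    import triton  # needed so eval can resolve the "triton.language.*" expression
    import triton.language as tl

    def test_dtype_codegen_all():
        names = (tl.dtype.SINT_TYPES + tl.dtype.UINT_TYPES +
                 tl.dtype.FP_TYPES + tl.dtype.OTHER_TYPES)
        for name in names:
            ty = tl.dtype(name)
            # The point of the fix: repr must round-trip through eval.
            assert eval(repr(ty)) == ty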

@jlebar (Collaborator) left a comment:

Thank you!

@jlebar enabled auto-merge (squash) on March 12, 2024 at 17:54
@jlebar merged commit b48fd2a into triton-lang:main on Mar 12, 2024
4 checks passed
htyu pushed a commit to htyu/triton that referenced this pull request Mar 20, 2024
karupayun pushed a commit to openxla/triton that referenced this pull request Apr 3, 2024