Register all tensor tiling ops #13042

Closed
wants to merge 1 commit

Conversation

Contributor

@nicolasvasilache nicolasvasilache commented Apr 12, 2023

Previously, we singled out tensor.pad, which incorrectly made all transforms in IREE
treat it as an op that does not support TilingInterface, in violation of the upstream
semantics.

Contributor

@MaheshRavishankar MaheshRavishankar left a comment


IREE already registers the pack and unpack interfaces.

Can you point me to an iree-opt + transform example that tiles pack / unpack?

Pad interface is not supported yet (that is WIP)

I am not sure what this means; I am using it (locally)...
Without this registration, iree-opt fails on a simple dyn_cast<TilingInterface>(padOp). What other solution do you recommend?
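
A minimal sketch of what such a registration might look like, assuming the upstream MLIR hook `tensor::registerTilingInterfaceExternalModels`; the exact header paths and function names have shifted across MLIR versions, so treat them as assumptions to verify:

```cpp
// Sketch: attach the upstream TilingInterface external models for tensor ops
// (pad, pack, unpack, ...) so that dyn_cast<TilingInterface>(padOp) succeeds
// downstream. Header paths and hook names may differ by MLIR version.
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Dialect/Tensor/IR/TensorTilingInterfaceImpl.h"
#include "mlir/IR/DialectRegistry.h"

void registerTensorTilingInterfaces(mlir::DialectRegistry &registry) {
  registry.insert<mlir::tensor::TensorDialect>();
  // Registers the external models for all tensor tiling ops instead of
  // singling out individual ops such as tensor.pad.
  mlir::tensor::registerTilingInterfaceExternalModels(registry);
}
```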

@nicolasvasilache nicolasvasilache changed the title from "Add tensor external model registration to Interfaces" to "Register all tensor tiling ops" Apr 12, 2023
@nicolasvasilache
Contributor Author

@MaheshRavishankar ok I think I understand your comment upon digging further, I updated the PR to do what I think you meant, please LMK.

Could you elaborate on why we had to segregate tensor.pad in this way?
I don't think I have seen precedent for this in MLIR yet..

@MaheshRavishankar
Contributor

@MaheshRavishankar ok I think I understand your comment upon digging further, I updated the PR to do what I think you meant, please LMK.

Yeah, you found where pack/unpack are registered, but bringing in the pad implementation causes issues (see below).

Could you elaborate on why we had to segregate tensor.pad in this way? I don't think I have seen precedent for this in MLIR yet..

The tiling implementation of tensor.pad is really painful to handle. The issue is the scf.if that is generated to handle the case where the extracted tile contains only the padded value. Even if pad alone can be handled in some fashion, the fusion of pad with producers causes real issues. I have been experimenting with this in #10184 (a long-running PR whose parts I push to main as and when possible, but it is close to landing completely). In particular, I am trying to change the tiling implementation of tensor.pad to do https://github.com/iree-org/iree-llvm-fork/compare/d3aed4f401fa35ea986d3967c529f4d2b24e2bb6...6a812487df8969517e38038f14827c3429b23946 . That works better, but it still needs some work for the cases where pad is fused with a producer...

I am wondering if you have any thoughts on changing the pad tiling implementation to not generate the scf.if, or on having a version of the tiling implementation that doesn't generate the if and allows downstream implementations (like IREE) to take on the burden of making sure that the generated code will never require the if-else. I think if we make sure that the tile size is never small enough to read from just the padded value, we should be able to avoid generating the if-else. Ideally we could just fold the if away by proving that the tile size is never going to go into just the padding, but that doesn't work today upstream AFAIK.

For the problem at hand, we could add a separate transformation for pad ops that bypasses the TilingInterface implementation for the pad operation and uses the bubbleUpPadSlice method directly. If that is too painful, I can land some of the changes from the above PR to get around the issues from having the scf.if generated. We can chat more on GVC....
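
A minimal sketch of that bypass, assuming the upstream `tensor::bubbleUpPadSlice` helper and its `generateZeroSliceGuard` flag; the exact header, return type, and parameter list vary across MLIR versions, so treat them as assumptions to check against the MLIR version in use:

```cpp
// Sketch: tile a tensor.pad for given tile offsets/sizes by calling the
// upstream helper directly, asking it not to emit the scf.if guard. The
// caller must then guarantee that no tile reads only padded values.
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Dialect/Tensor/IR/TensorTilingInterfaceImpl.h"
#include "mlir/IR/Builders.h"

static void tilePadWithoutGuard(mlir::OpBuilder &b, mlir::tensor::PadOp padOp,
                                llvm::ArrayRef<mlir::OpFoldResult> offsets,
                                llvm::ArrayRef<mlir::OpFoldResult> sizes) {
  // generateZeroSliceGuard = false skips the scf.if that covers tiles that
  // would read only padding (assumed flag; double-check the signature).
  auto tiled = mlir::tensor::bubbleUpPadSlice(
      b, padOp, offsets, sizes, /*generateZeroSliceGuard=*/false);
  (void)tiled; // replace uses of the original pad with the tiled result
}
```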

@MaheshRavishankar MaheshRavishankar changed the title Register all tensor tiling ops Register all tensor tiling ops Apr 12, 2023
@nicolasvasilache
Contributor Author

We can chat more on GVC....

SG, let's chat tomorrow as there are quite a few moving pieces that are converging on the unaligned matmul side.

@benvanik
Collaborator

Closing stale PR.

@benvanik benvanik closed this May 14, 2024