
Add support for returning more than one copy of the same tensor #1228

Merged · 1 commit · Aug 18, 2022

Conversation

ramiro050 (Collaborator)

One of the simplifications made by the pass `RefinePublicReturn`
currently only happens if the tensor in question only has one
user. However, the current method of checking this does not correctly
handle the case of a user having multiple uses of the same
tensor. This commit makes sure only unique users are considered.
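The failure mode described above can be modeled with a minimal, self-contained Python sketch. This is not MLIR's actual API; `Op`, `Value`, and the use-list layout are illustrative stand-ins for how a value's use list holds one entry per operand slot, so an op that uses the same tensor twice appears twice when iterating users:

```python
# Minimal model (NOT MLIR code) of why iterating a value's users can
# over-count: the use list has one entry per use, and an op using the
# same tensor twice therefore shows up twice.

class Op:
    def __init__(self, name):
        self.name = name

class Value:
    def __init__(self):
        self.uses = []  # list of (owner_op, operand_index), one per use

    def users(self):
        # Like getUsers(): yields one owner per use, NOT deduplicated.
        return [owner for owner, _ in self.uses]

tensor = Value()
ret = Op("func.return")
tensor.uses.append((ret, 0))  # return %t, %t -> two uses ...
tensor.uses.append((ret, 1))  # ... but the same single owner

assert len(tensor.users()) == 2       # naive count: looks like two users
assert len(set(tensor.users())) == 1  # unique users: actually just one
```

Collecting users into a set (in C++, something like an `llvm::SmallPtrSet`) before counting them is the shape of the fix this commit describes.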

@silvasean (Contributor) left a comment


Fascinating. I always thought that getUsers was deduped.

Review thread on lib/Dialect/Torch/Transforms/RefinePublicReturn.cpp (outdated, resolved)
@silvasean (Contributor)

Did you actually run into this in a real test? I would have thought that by construction in our pipeline we wouldn't hit this situation of multiple uses. If that's the case, then we can change it to a check that there is only one use.

@ramiro050 (Collaborator, Author)

> Fascinating. I always thought that getUsers was deduped.

Yeah, I thought it was weird too.

> Did you actually run into this in a real test? I would have thought that by construction in our pipeline we wouldn't hit this situation of multiple uses. If that's the case, then we can change it to a check that there is only one use.

I ran into this while running the unit tests I had written for my torchdynamo compiler. It is definitely something that can be modified to not return two copies of the same tensor. In real world applications, I imagine this case is not common.

> I would have thought that by construction in our pipeline we wouldn't hit this situation of multiple uses

What aspects of the pipeline would guarantee this?
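For a sense of the kind of input that triggers this, here is a hedged sketch: plain Python stands in for a TorchDynamo-traced program, and `f` is an illustrative name, not a test from the actual suite. A function that returns the same tensor twice gives the return op two uses of one value:

```python
# Illustrative shape of a program whose return receives the same value
# twice. A list stands in for a tensor so identity is observable.

def f(x):
    y = [x]       # some intermediate "tensor"
    return y, y   # same value returned twice -> one user, two uses

a, b = f(3)
assert a is b  # both results are the very same object
```

When such a function is lowered, the `func.return` op is a single user of `y` but holds two uses of it, which is exactly the case the naive user count mishandled.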

@silvasean (Contributor)

> Did you actually run into this in a real test? I would have thought that by construction in our pipeline we wouldn't hit this situation of multiple uses. If that's the case, then we can change it to a check that there is only one use.

> I ran into this while running the unit tests I had written for my torchdynamo compiler. It is definitely something that can be modified to not return two copies of the same tensor. In real world applications, I imagine this case is not common.

Okay, that is real enough. Let's land this.

> I would have thought that by construction in our pipeline we wouldn't hit this situation of multiple uses

> What aspects of the pipeline would guarantee this?

I was thinking that something could end up inserting one copy per operand of the return op which would guarantee that the return op always had distinct values. But I guess that isn't the case here since you hit it from "user code".
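The "one copy per operand" idea can be sketched as follows. This is a conceptual model only: `copy_tensor` is a hypothetical stand-in for a copy op, and lists stand in for tensors. Inserting a copy for every return operand would make the operands distinct values, so the multiple-uses case could never reach the return:

```python
# Sketch of copy-per-operand: each return operand gets its own copy,
# so even `return %t, %t` becomes a return of two distinct values.

def copy_tensor(t):
    return list(t)  # hypothetical stand-in for a tensor copy op

t = [1, 2, 3]
operands = [t, t]  # the problematic case: same tensor twice
copied = [copy_tensor(o) for o in operands]

assert copied[0] is not copied[1]   # operands are now distinct values
assert copied[0] == copied[1] == t  # but numerically identical
```

As the comment above notes, nothing in the pipeline actually enforces this, which is why IR from user code can still hit the duplicate-operand case.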

@ramiro050 merged commit 9bc606c into llvm:main on Aug 18, 2022
@ramiro050 deleted the add-tuple-same-tensor-return branch on August 18, 2022 at 22:41
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request on Oct 3, 2022