
Cherry pick: Allow creation of TCP groups where an op has multiple uses #76

Conversation

srinathava
Contributor

Cherry picking #74

We previously only allowed an op to have a single use during the creation of a group for it. This PR relaxes that to allow multiple uses, as long as all the uses belong to the same region.
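The relaxed check can be modeled outside MLIR as follows. This is an illustrative Python sketch only — the actual implementation walks MLIR `Operation` uses and regions, and all names here are hypothetical:

```python
# Illustrative model of the relaxed fusion-group check (hypothetical
# names; the real code operates on MLIR operations and regions).

def can_create_group(use_regions, allow_multiple_uses=True):
    """Return True if a group may be created for an op.

    use_regions: one region identifier per use of the op.
    Previously only a single use was allowed; with this change,
    multiple uses are fine as long as they all belong to the
    same region.
    """
    if not use_regions:
        return False  # no uses: nothing to group around
    if not allow_multiple_uses:
        return len(use_regions) == 1
    # All uses must belong to the same region.
    return all(r == use_regions[0] for r in use_regions)
```

For example, `can_create_group(["region0", "region0"])` now succeeds, while uses split across `"region0"` and `"region1"` still reject the group.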

sjain-stanford and others added 30 commits October 26, 2023 17:02
…uise-automation#3)

This is useful for ML ranker C7:
```
ERROR: [RoboCompiler] - RoboCompiler step 'RoboCompiler Optimize' failed with the following error:
cruise/mlp/robotorch/project/trajectory_ranking/model_architectures/per_npc_ranker.py:712:0: error: failed to legalize operation 'torch.aten.index.Tensor_hacked_twin' that was explicitly marked illegal
```
TorchToTosa doesn't match the AtenIndexTensorHackedTwinOp due to [this](https://sourcegraph.robot.car/github.robot.car/cruise/mla-robocomp-torch-mlir/-/blob/lib/Conversion/TorchToTosa/TorchToTosa.cpp?L3788) constraint:
```
    // Right now only support multiple indexes with same shape
    // TODO for different shape multiple indexes, add broadcast_to for small
    // shape
    for (auto indexShapeOneDim : indexesShape) {
      if (!llvm::equal(indexesShape[0], indexShapeOneDim)) {
        return rewriter.notifyMatchFailure(
            op, "unimplemented: Only support multi indexes with same shape");
      }
    }
```
where we see multiple index tensors of different shapes:
```
%278 = torch.prim.ListConstruct %277, %273, %270, %205 : (!torch.vtensor<[1,1,1,1],si64>, !torch.vtensor<[24,1,1],si64>, !torch.vtensor<[2,1],si64>, !torch.vtensor<[40],si64>) -> !torch.list<vtensor> 
%279 = torch.aten.index.Tensor_hacked_twin %arg9, %278 : !torch.vtensor<[1,24,2,40],f32>, !torch.list<vtensor> -> !torch.vtensor<[1,24,2,40],f32>
```
We can dig into supporting this case in TorchToTosa eventually; for now, this unblocks the immediate case for C7, where we plan to map this op to a data movement kernel (like gather).
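The constraint quoted above can be modeled in a few lines; this Python sketch (hypothetical names, not the actual C++ pattern) shows why the shapes from the `ListConstruct` in the IR above fail to match:

```python
# Model of TorchToTosa's current constraint on AtenIndexTensorHackedTwinOp:
# every index tensor must have the same shape as the first, otherwise the
# pattern reports a match failure.

def indexes_match(index_shapes):
    """Mimics the C++ loop: each index shape must equal the first."""
    return all(s == index_shapes[0] for s in index_shapes)

# Shapes of the four index tensors from the failing IR above:
shapes = [(1, 1, 1, 1), (24, 1, 1), (2, 1), (40,)]
```

Here `indexes_match(shapes)` is `False`, which corresponds to the `notifyMatchFailure` path and ultimately the "failed to legalize" error.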
Move bottom up fuser declaration to header file

No testing required since it's a minor restructuring

---------

Co-authored-by: Muhammad Abubakar <muhammad.abubakar@getcruise.com>
…-automation#7)

Move bottom up fuser declaration to header file

No testing required since it's a minor restructuring

Cherry-pick from
cruise-automation@c4c94fb
---------

Co-authored-by: Muhammad Abubakar <muhammad.abubakar@getcruise.com>
Bumps TCP to `c4c94fb25f2c3470839796025f40f0487b3b69b6`.
…ion#25)

As titled, lower torch transposed convolution to a custom TCP op to
avoid a miscompilation in `TorchToTosa`.

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
…ruise-automation#25) (cruise-automation#10)

As titled, lower torch transposed convolution to a custom TCP op to
avoid a miscompilation in `TorchToTosa`.

Cherry-pick from upstream: cruise-automation#25

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
Add a utility to aid in converting torch ops to `tcp.custom_op`

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
…ruise-automation#12)

Add a utility to aid in converting torch ops to `tcp.custom_op`

Cherry picking cruise-automation#33

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
)

Lowering `aten.size.int` op to `tensor::dim` op during torch-to-tcp.

Test (in docker):

`bazel test //test/...`
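The semantics behind this lowering are simple: `aten.size.int` asks for the extent of one dimension, which is exactly what `tensor::dim` provides. A minimal Python model (hypothetical helper name; assumes PyTorch's negative-dimension convention for `size(dim)`):

```python
# Model of the aten.size.int -> tensor::dim lowering: both query the
# extent of a single dimension of a tensor's shape.

def aten_size_int(shape, dim):
    """torch.aten.size.int semantics: size of dimension `dim`.

    Negative dims index from the end, as in PyTorch's Tensor.size(dim).
    """
    rank = len(shape)
    return shape[dim % rank]
```

So for the `[1,24,2,40]` tensor seen earlier, `aten_size_int((1, 24, 2, 40), 1)` yields `24`, the same value a `tensor::dim` op would produce.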

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
…cruise-automation#34)

Conv 1D fails legalization in TorchToTosa, which I believe [only
supports](https://sourcegraph.com/github.com/llvm/torch-mlir@faa4517e83d82348259165412d0744ba776360b3/-/blob/lib/Conversion/TorchToTosa/TorchToTosa.cpp?L1871)
converting aten.convolution for the 2D case:
```
  if (inputTy.getRank() != 4)
    return rewriter.notifyMatchFailure(
        op, "Unimplemented: only 2D convolutions supported");
```

This change dynamically marks only conv2d as legal, so the remaining
variants can convert to `tcp.custom_op`.
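The dynamic legality rule can be modeled as a rank predicate; this is an illustrative Python sketch with hypothetical names (the real code registers a callback on the MLIR conversion target):

```python
# Model of the dynamic legality rule: aten.convolution stays legal for
# the TorchToTosa path only when the input is rank-4 (i.e. 2D conv);
# all other ranks are marked illegal so they fall through to
# tcp.custom_op instead of failing legalization.

def is_convolution_legal_for_tosa(input_rank):
    """Mirrors the rank check quoted above from TorchToTosa."""
    return input_rank == 4

def conversion_path(input_rank):
    """Which lowering a convolution takes (hypothetical helper)."""
    if is_convolution_legal_for_tosa(input_rank):
        return "TorchToTosa"
    return "tcp.custom_op"
```

Under this model, a conv1d input (rank 3) routes to `tcp.custom_op` rather than hitting the "only 2D convolutions supported" match failure.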
Updates mlir-tcp to `90768ec2801ed9959144e8a1ca800e34ee2c7f54` and resolves
some minor merge conflicts around testing changes in [this
PR](cruise-automation#28).
sjain-stanford and others added 23 commits March 26, 2024 15:33
Replaces the TCP custom op `torch.tensorrt.execute_engine_variadic`
with `tensorrt.execute_engine` for generic usage.

To test, from c/c repo:

`bazel test @mlir-tcp//test/...`
Picks the dynamic legality fixes to TorchToTcp amidst other changes.
Create tensor from index array handle
Remove `LRDPrefilterPrediction` related content

Test from c/c:
`bazel test @mlir-tcp//test/...`
…tomation#74)

We previously only allowed an op to have a single use during the
creation of a group for it. This PR relaxes that to allow multiple uses,
as long as all the uses belong to the same region.

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
@srinathava srinathava closed this Jun 24, 2024