
Torch-MLIR LTC Backend Lowering Codegen #621

Conversation

antoniojkim (Collaborator)

Adding the codegen script and lowering source to facilitate lowering LTC ATen ops to MLIR via the Torch-MLIR ATen dialect.

The LTC backend is still far from complete; this is just another step towards a working backend. At the moment it builds and compiles, but it cannot yet run: a large amount of functionality is still unimplemented, and "unimplemented error" placeholders have been added just to get everything to compile.
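
For context, a minimal sketch of what one of these generated placeholders could look like (the helper name and message are hypothetical; `LazyIrSchema` and the `UNIMPLEMENTED_ERROR` macro do appear in the diff below):

```python
from tools.codegen.api.lazy import LazyIrSchema  # codegen location at the time

def gen_unimplemented_lowering(func) -> str:
    """Sketch: emit a C++ stub body that compiles but fails loudly at
    runtime for ops whose MLIR lowering is not yet implemented."""
    schema = LazyIrSchema(func)
    return f"""
    UNIMPLEMENTED_ERROR(
        "'{schema.aten_name}' lowering to MLIR is not yet implemented"
    );
    """
```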

Next steps include making an example vendor plugin that we can use to test the backend so far. After that, we can start replacing the placeholders with proper implementations.

CC: @wconstab, please take a quick glance at how the Torch-MLIR LTC backend is architected so far, and feel free to leave any comments about the design.

CC: @henrytwo @ke1337 @emad-cb

@antoniojkim antoniojkim self-assigned this Feb 25, 2022

backend_path.joinpath("LazyShapeInference.cpp").write_text(
Collaborator

Note that I've just landed pytorch/pytorch#72756, so you may need to update this part.

Collaborator

E.g., you probably shouldn't have to do anything at all for shape inference, other than contribute new functions if any are found missing.

Collaborator Author

@wconstab are lazy shape inference declarations not being autogenerated at all anymore?

Collaborator

Correct: the new way is that the header and impl are handwritten, but the generator spits out a message saying "missing this function, please add it" and gives you a signature you can copy/paste into the header. This way backends can share the inference functions, and there isn't an issue where each backend generates a different header that compiles against a shared implementation.
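
A rough sketch of the workflow being described, under the assumption that the generator simply prints the missing declaration (the function and signature below are illustrative, not the actual generator output):

```python
def warn_missing_shape_inference(schema) -> None:
    """Sketch: instead of generating a shape inference header, print a
    signature for the developer to copy/paste into the shared,
    handwritten header."""
    # Hypothetical signature; real compute_shape_* signatures vary per op.
    sig = (f"TORCH_API std::vector<torch::lazy::Shape> "
           f"compute_shape_{schema.aten_name}(const at::Tensor& self);")
    print(f"Missing shape inference function for '{schema.aten_name}'.\n"
          f"Please add this declaration to the shared header:\n  {sig}")
```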

Collaborator Author

Got it. Thanks!

schema = LazyIrSchema(func)

return f"""
UNIMPLEMENTED_ERROR(
Collaborator

I'm curious: how do you plan to code-generate the lowerings to MLIR? I guess you have the ATen-MLIR dialect and can piggyback off that?

Collaborator Author

The design for that is still up in the air. However, I was thinking that I could do something similar to what you did here:
https://github.com/pytorch/pytorch/blob/d9896b8b4f198fff43ff8003553c13a26381c7ce/tools/codegen/dest/lazy_ts_lowering.py#L37
and add a generic LowerMlirBuiltin function that handles lowering to MLIR. Yes, there is a Torch-MLIR ATen dialect that will be used here.
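
For concreteness, a sketch of what such a codegen function could look like, modeled on the linked lazy_ts_lowering.py (only `LowerMlirBuiltin` comes from the comment above; the `loctx`/`operand` plumbing mirrors the TS backend and `positional_values` is an assumed `LazyIrSchema` field here):

```python
def lower_to_mlir_builtin(func) -> str:
    """Sketch: generate a lowering body that defers to a generic
    LowerMlirBuiltin helper, mirroring the TS backend's LowerTSBuiltin."""
    schema = LazyIrSchema(func)
    # One GetOutputOp per positional value operand of the lazy node.
    args = ", ".join(f"loctx->GetOutputOp(operand({i}))"
                     for i in range(len(schema.positional_values)))
    return f"""
    return torch::lazy::LowerMlirBuiltin(function, op().op, {{{args}}});
    """
```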

Collaborator

Yea, I'm less concerned with how you call or generate some function; I'm more curious where the source of truth for the lowering is maintained (and whether you have some way of actually generating the lowering, as opposed to hand-implementing it).

silvasean (Contributor), Feb 28, 2022

I would definitely prefer it if we went through the JIT IR path rather than directly creating Lazy -> MLIR lowerings. That would let us more easily share op decompositions and other infra built at the JIT IR level.

Ideally, upstream would allow us to share all the ts_backend code, so we would just need to implement the runtime bits and the TS -> MLIR bits, but not LTC Node -> MLIR.

cc @Chillee

Contributor

> I would definitely prefer it if we went through the JIT IR path rather than directly creating Lazy -> MLIR lowerings.

I think(?) this makes sense? Torch-MLIR has spent the most time investing in a TorchScript => MLIR lowering, and LazyTensor has spent a lot of effort on an LTC => TorchScript lowering. Is there any reason these couldn't be (relatively) trivially combined, instead of adding a whole new lowering path for MLIR?

I don't know the other design restrictions here, but that makes sense to me.

Collaborator

Yea, I think this should be the best plan, and it was our intent. The one issue I'm aware of is that we haven't landed the entire TS backend yet, and it's tough for the MLIR team to take a C++ build dep on our code until we land it all and they can pick it up in nightlies. I'm trying to push for landing it sooner rather than later, but it's at least a month out still.


# Remove lazy_tensor_core imports
subprocess.check_call(
Collaborator

Let me know if you have a proposal for how to make this cleaner. I'm working on getting the lazy tensor/TorchScript codegen running in the main PyTorch build and can consider making some API changes to help here.

Collaborator Author

In general, removing all lazy_tensor_core includes will be helpful, I think. I don't think any backend besides the TorchScript backend needs to include anything from that directory; the only includes a generic vendor backend should need are in torch/csrc/lazy.
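
As a concrete illustration of the kind of post-processing being discussed, the snippet under review could be doing something along these lines (the file name, the illustrative path, and the exact sed invocation are assumptions):

```python
import subprocess
from pathlib import Path

backend_path = Path("python/torch_mlir/csrc/backend")  # illustrative path

# Delete lines with lazy_tensor_core includes from a generated source file,
# so a generic vendor backend depends only on torch/csrc/lazy.
# "LazyNativeFunctions.cpp" is a hypothetical file name here.
subprocess.check_call(
    ["sed", "-i", "/lazy_tensor_core/d",
     str(backend_path / "LazyNativeFunctions.cpp")]
)
```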

wconstab (Collaborator)

Hopefully this is a minor issue, or actually helpful for you, but FYI I am working on a change to enable torch/xla folks to override functionality in the LazyTensor class. It touches a lot of lines of code, using a pointer to LazyTensor instead of actual objects.

pytorch/pytorch#73429

antoniojkim (Collaborator Author)

> Hopefully this is a minor issue, or actually helpful for you, but FYI I am working on a change to enable torch/xla folks to override functionality in the LazyTensor class. It touches a lot of lines of code, using a pointer to LazyTensor instead of actual objects.
>
> pytorch/pytorch#73429

Thanks for the heads up. I'm sure there will be some conflicts here or there, but a move towards better generalizability is welcome 😄

silvasean (Contributor) left a comment

LGTM! I think that generating the files is a good choice at this stage.

@antoniojkim antoniojkim merged commit 1e3cd96 into llvm:torch_mlir_ltc_backend Feb 26, 2022
@antoniojkim antoniojkim deleted the antoniojkim/torch_mlir_ltc_backend_codegen branch February 26, 2022 00:50
antoniojkim added a commit that referenced this pull request May 26, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
antoniojkim added a commit that referenced this pull request Jun 30, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
antoniojkim added a commit that referenced this pull request Jun 30, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
antoniojkim added a commit that referenced this pull request Jul 5, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
antoniojkim added a commit that referenced this pull request Jul 7, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
henrytwo pushed a commit that referenced this pull request Jul 8, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
henrytwo pushed a commit that referenced this pull request Jul 8, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
henrytwo pushed a commit that referenced this pull request Jul 12, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
antoniojkim added a commit that referenced this pull request Jul 15, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
antoniojkim added a commit that referenced this pull request Jul 19, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
antoniojkim added a commit that referenced this pull request Jul 22, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
henrytwo pushed a commit that referenced this pull request Jul 29, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
henrytwo pushed a commit that referenced this pull request Jul 29, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header
henrytwo pushed a commit that referenced this pull request Jul 30, 2022
* Codegen and build LTC lowering

* Add LazyShapeInference header