Reference Lazy Backend (#1045)
* Changed Example MLIR backend to Reference MLIR backend

* Moved reference_ltc_backend into csrc

* Merged sys_utils.h

* Renamed reference_ltc_backend to reference_lazy_backend

* Addressed review comments

* Update docs with new library name

* Removed _REFERENCE_LAZY_BACKEND from .gitignore

* Added reference_lazy_backend to the TorchMLIRPythonModules dependency list
henrytwo authored and antoniojkim committed Jul 15, 2022
1 parent aa70b49 commit 8ab9dd0
Showing 28 changed files with 195 additions and 181 deletions.
3 changes: 0 additions & 3 deletions .gitignore
@@ -26,6 +26,3 @@ bazel-*
 
 # Autogenerated files
 /python/torch_mlir/csrc/base_lazy_backend/generated
-
-# Example backend
-examples/ltc_backend/ltc_backend/_EXAMPLE_MLIR_BACKEND.cpython-37m-x86_64-linux-gnu.so
1 change: 0 additions & 1 deletion CMakeLists.txt
@@ -165,4 +165,3 @@ else()
 endif()
 
 add_subdirectory(test)
-add_subdirectory(examples)
2 changes: 1 addition & 1 deletion build_tools/autogen_ltc_backend.py
@@ -377,7 +377,7 @@ def extract_signatures(text):
 // for ops that dont have a corresponding structured kernel or shape definition
 #include "shape_inference.h"
-#include "../../utils/exception.h"
+#include "../utils/exception.h"
 namespace torch {{
 namespace lazy {{
 {}
19 changes: 11 additions & 8 deletions docs/ltc_backend.md
@@ -60,12 +60,15 @@ Generated files are created in this directory, which is ignored by version contr
 - `shape_inference.cpp`
   - Implementation of select shape inference functions (most functions are [implemented upstream](https://github.com/pytorch/pytorch/blob/master/torch/csrc/lazy/core/shape_inference.cpp))
 
+### Reference Backend ([`python/torch_mlir/csrc/reference_lazy_backend`](../python/torch_mlir/csrc/reference_lazy_backend))
+
+- `backend_impl.{cpp,h}`
+  - Reference Torch-MLIR LTC backend implementation, which simply stores the MLIR as a string and executes computation on CPU
+- `reference_lazy_backend_pybind.cpp`
+  - pybind for reference Torch-MLIR LTC backend
+
 ### Examples ([`examples`](../examples))
 
-- `examples/ltc_backend/ltc_backend/csrc/backend/backend_impl.{cpp,h}`
-  - Example Torch-MLIR LTC backend implementation, which simply stores the MLIR as a string and executes computation on CPU
-- `examples/ltc_backend/ltc_backend/csrc/example_mlir_backend_pybind.cpp`
-  - pybind for example Torch-MLIR LTC backend
 - `ltc_backend_bert.py`
   - Example HuggingFace BERT model traced by LTC to MLIR
 - `ltc_backend_mnist.py`
@@ -77,7 +80,7 @@ Generated files are created in this directory, which is ignored by version contr
 
 The journey begins with a tensor in PyTorch on the `lazy` device, which may undergo a number of operations during its lifetime.
 ```python
->>> ltc_backend._initialize()
+>>> lazy_backend._initialize()
 >>> x = torch.tensor(..., device='lazy')
 >>> y = torch.tanh(x)
 ...
@@ -116,17 +119,17 @@ Finally, the compiled computation is sent to `TorchMlirBackendImpl::ExecuteComputation`
 
 ## Implementing a custom backend
 
-An example implementation of a custom backend is available [here](../examples/ltc_backend/ltc_backend).
+A reference implementation of a custom backend is available [here](../python/torch_mlir/csrc/reference_lazy_backend/).
 All the work involved with generating MLIR is handled in the base LTC backend, so vendors only need to worry about implementing `Compile`, `ExecuteComputation`, and some other minor methods to interface with the device.
 
 A pybind is needed to invoke C++ code to register the autogen PyTorch kernels and the custom backend itself.
-Most of the code in the example implementation should be reusable, excluding some debug related function (e.g. `get_latest_computation`).
+Most of the code in the reference implementation should be reusable, excluding some debug related function (e.g. `get_latest_computation`).
 
 ## Future Expansion
 
 There are a number of areas for future improvement:
 - Generate source information in `jit::Graph` so it can be embedded in the MLIR
-- Currently the example backend implementation executes via the `jit::Graph` instead of the MLIR since we currently lack lowerings for many ops, which would make it difficult to run models such as HF BERT
+- Currently the reference backend implementation executes via the `jit::Graph` instead of the MLIR since we currently lack lowerings for many ops, which would make it difficult to run models such as HF BERT
   - In the future, we should change the implementation to lower the MLIR to linalg and execute on a reference backend
 - As new models get tested, we will inevitably run into errors related to unimplemented shape inference functions.
   This problem is simply solved by implementing the missing function, or adding a structured kernel to PyTorch.
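[Editorial note on the docs change above: the docs state that a vendor backend only needs to supply `Compile` and `ExecuteComputation` on top of the base LTC backend. The real interface is C++ (`TorchMlirBackendImpl`); the snippet below is only a hypothetical Python sketch of that division of labor, and every name in it (`VendorBackend`, `EchoBackend`) is invented for illustration.]

```python
from abc import ABC, abstractmethod

# Schematic of the vendor-facing surface described in the docs: everything
# MLIR-related lives in the base backend, so a vendor supplies only the
# compile/execute pair (plus minor device-interface methods).
class VendorBackend(ABC):
    @abstractmethod
    def compile(self, computation): ...

    @abstractmethod
    def execute_computation(self, compiled, arguments): ...

class EchoBackend(VendorBackend):
    """Toy stand-in: 'compiles' by tagging, 'executes' by echoing its arguments."""
    def compile(self, computation):
        return ("compiled", computation)

    def execute_computation(self, compiled, arguments):
        return list(arguments)

backend = EchoBackend()
artifact = backend.compile("graph")
print(backend.execute_computation(artifact, [1, 2, 3]))  # prints [1, 2, 3]
```

The reference backend in this commit fills the same two slots, except that its `ExecuteComputation` runs the captured `jit::Graph` on CPU while stashing the MLIR string for inspection.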
4 changes: 2 additions & 2 deletions docs/ltc_examples.md
@@ -6,10 +6,10 @@ Refer to the main documentation [here](ltc_backend.md).
 ```python
 import torch
 import torch._lazy
-import ltc_backend.ltc_backend._EXAMPLE_MLIR_BACKEND as ltc_backend
+import torch_mlir.reference_lazy_backend._REFERENCE_LAZY_BACKEND as lazy_backend
 
 # Register the example LTC backend.
-ltc_backend._initialize()
+lazy_backend._initialize()
 
 device = 'lazy'
1 change: 0 additions & 1 deletion examples/CMakeLists.txt

This file was deleted.

26 changes: 0 additions & 26 deletions examples/ltc_backend/ltc_backend/csrc/utils/sys_utils.h

This file was deleted.

8 changes: 4 additions & 4 deletions examples/ltc_backend_bert.py
@@ -113,8 +113,8 @@ def main(device='lazy', full_size=False):
     losses = train(model, num_epochs, num_training_steps, train_dataloader, device)
 
     # Get debug information from LTC
-    if 'ltc_backend' in sys.modules:
-        computation = ltc_backend.get_latest_computation()
+    if 'torch_mlir.reference_lazy_backend._REFERENCE_LAZY_BACKEND' in sys.modules:
+        computation = lazy_backend.get_latest_computation()
     if computation:
         print(computation.debug_string())
 
@@ -148,9 +148,9 @@ def main(device='lazy', full_size=False):
         torch._lazy.ts_backend.init()
 
     elif args.device == "MLIR_EXAMPLE":
-        import ltc_backend.ltc_backend._EXAMPLE_MLIR_BACKEND as ltc_backend
+        import torch_mlir.reference_lazy_backend._REFERENCE_LAZY_BACKEND as lazy_backend
 
-        ltc_backend._initialize()
+        lazy_backend._initialize()
 
         device = "lazy"
         print("Initialized backend")
8 changes: 4 additions & 4 deletions examples/ltc_backend_mnist.py
@@ -65,8 +65,8 @@ def forward(self, x):
     torch._lazy.mark_step()
 
     # Get debug information from LTC
-    if 'ltc_backend' in sys.modules:
-        computation = ltc_backend.get_latest_computation()
+    if 'torch_mlir.reference_lazy_backend._REFERENCE_LAZY_BACKEND' in sys.modules:
+        computation = lazy_backend.get_latest_computation()
     if computation:
         print(computation.debug_string())
 
@@ -93,9 +93,9 @@ def forward(self, x):
         torch._lazy.ts_backend.init()
 
     elif args.device == "MLIR_EXAMPLE":
-        import ltc_backend.ltc_backend._EXAMPLE_MLIR_BACKEND as ltc_backend
+        import torch_mlir.reference_lazy_backend._REFERENCE_LAZY_BACKEND as lazy_backend
 
-        ltc_backend._initialize()
+        lazy_backend._initialize()
 
         device = "lazy"
         print("Initialized backend")
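[Editorial note: both example scripts above use the same guard — query the backend for debug info only when its extension module was actually imported. A minimal stdlib-only sketch of that pattern follows; the helper name and its `backend` parameter are invented here for illustration.]

```python
import sys

# Extension module name used by the example scripts after this rename.
BACKEND_MODULE = "torch_mlir.reference_lazy_backend._REFERENCE_LAZY_BACKEND"

def latest_computation_or_none(backend=None):
    """Return the backend's latest computation only if its module was imported."""
    if BACKEND_MODULE in sys.modules and backend is not None:
        return backend.get_latest_computation()
    return None

# Without the extension module loaded, the guard short-circuits:
print(latest_computation_or_none())  # prints None
```

The `sys.modules` check keeps the scripts runnable on the TorchScript backend path, where the MLIR backend is never imported.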
5 changes: 3 additions & 2 deletions python/CMakeLists.txt
@@ -60,7 +60,8 @@ declare_mlir_python_extension(TorchMLIRPythonExtensions.Main
 # Lazy Tensor Core
 ################################################################################
 
-add_subdirectory(torch_mlir/csrc)
+add_subdirectory(torch_mlir/csrc/base_lazy_backend)
+add_subdirectory(torch_mlir/csrc/reference_lazy_backend)
 
 ################################################################################
 # Optionally handle JIT IR importer.
@@ -152,6 +153,6 @@ endif()
 
 # Add Torch-MLIR LTC backend as dependency
-add_dependencies(TorchMLIRPythonModules torch_mlir_ltc_backend)
+add_dependencies(TorchMLIRPythonModules reference_lazy_backend)
 
 add_subdirectory(test)
 
python/torch_mlir/csrc/CMakeLists.txt renamed to python/torch_mlir/csrc/base_lazy_backend/CMakeLists.txt
@@ -20,15 +20,15 @@ include_directories(BEFORE
 link_directories("${TORCH_INSTALL_PREFIX}/lib")
 
 set(LTC_GENERATED
-  base_lazy_backend/generated/LazyNativeFunctions.cpp
-  base_lazy_backend/generated/RegisterLazy.cpp
-  base_lazy_backend/generated/shape_inference.cpp
+  generated/LazyNativeFunctions.cpp
+  generated/RegisterLazy.cpp
+  generated/shape_inference.cpp
 )
 set(LTC_BACKEND_DEPENDS
-  base_lazy_backend/mlir_lowering_context.cpp
-  base_lazy_backend/mlir_native_functions.cpp
-  base_lazy_backend/mlir_node_lowering.cpp
-  base_lazy_backend/shape_inference.cpp
+  mlir_lowering_context.cpp
+  mlir_native_functions.cpp
+  mlir_node_lowering.cpp
+  shape_inference.cpp
 )
 
 # Generate Lazy IR Nodes
@@ -57,10 +57,10 @@ add_custom_target(
 add_library(torch_mlir_ltc_backend SHARED
   ${LTC_GENERATED}
   ${LTC_BACKEND_DEPENDS}
-  base_lazy_backend/backend_impl.cpp
-  base_lazy_backend/mlir_node.cpp
-  base_lazy_backend/ops/device_data.cpp
-  base_lazy_backend/ops/generic.cpp
+  backend_impl.cpp
+  mlir_node.cpp
+  ops/device_data.cpp
+  ops/generic.cpp
 )
target_compile_features(torch_mlir_ltc_backend PRIVATE cxx_std_17)

4 changes: 2 additions & 2 deletions python/torch_mlir/csrc/base_lazy_backend/backend_impl.cpp
@@ -15,12 +15,12 @@
 #include <torch/csrc/lazy/backend/lowering_context.h>
 #include <torch/csrc/lazy/core/shape.h>
 
-#include "../utils/debug.h"
-#include "../utils/exception.h"
 #include "backend_impl.h"
 #include "ir_builder.h"
 #include "mlir_lowering_context.h"
 #include "ops/device_data.h"
+#include "utils/debug.h"
+#include "utils/exception.h"
 
 namespace torch {
 namespace lazy {
2 changes: 1 addition & 1 deletion python/torch_mlir/csrc/base_lazy_backend/ir_builder.h
@@ -22,7 +22,7 @@
 #include "mlir_node.h"
 #include "ops/device_data.h"
 #include "ops/generic.h"
-#include "../utils/exception.h"
+#include "utils/exception.h"
 
 // This file contains the TorchMlir IrBuilder
python/torch_mlir/csrc/base_lazy_backend/mlir_lowering_context.cpp
@@ -17,13 +17,13 @@
 #include <torch/csrc/lazy/core/lazy_graph_executor.h>
 
 #include "../../dialects/torch/importer/jit_ir/csrc/function_importer.h"
-#include "../utils/debug.h"
-#include "../utils/exception.h"
 #include "backend_impl.h"
 #include "mlir-c/Registration.h"
 #include "mlir_lowering_context.h"
 #include "mlir_node.h"
 #include "torch-mlir-c/Registration.h"
+#include "utils/debug.h"
+#include "utils/exception.h"
 
 namespace torch {
 namespace lazy {