Custom Op TPU #53
Had a brief chat with @frankchn and it sounds like custom ops are not supported on TPU yet.
In the TPU FAQ I see:

> While it is technically possible to write an XLA HLO op and get it to run on TPUs, we currently don't expose any way to load arbitrary user written HLO ops onto the TPU system itself. This may change in future releases, but we don't have anything to announce today.
@frankchn OK. Can you reach anyone internally to fix the FAQ? Because with that text it seems that there is currently an "undocumented" path to build those custom ops.
Yup, working on it. Thanks for bringing that to our attention!
Thanks
Is there a way to specify a fallback option for TPUs? I have an optimized custom op for CPUs and GPUs, but I want my custom op to be able to run, even inefficiently, on TPUs. The op can be expressed with standard TF operations, so I'm just looking for a way to register a Python function as the TPU implementation of the op. Is this possible?
@orsharir Would it be possible to just encapsulate the op you want in a Python function, and then switch between your custom op implementation and the TF default op implementation using flags?
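The flag-based dispatch suggested above can be sketched as follows. This is a minimal illustration with hypothetical names (`leaky_relu`, `USE_CUSTOM_OP`), using plain scalar math as a stand-in for the real custom op and the standard-ops fallback:

```python
# Hypothetical flag: set to False on platforms (e.g. TPU) where the
# optimized custom op is unavailable.
USE_CUSTOM_OP = False

def _custom_leaky_relu(x, alpha):
    # Stand-in for the optimized custom op (CPU/GPU only).
    raise NotImplementedError("custom op not available on this platform")

def _fallback_leaky_relu(x, alpha):
    # Fallback composed of standard operations; in TF this would be
    # ordinary ops that XLA can compile for TPU.
    return x if x > 0 else alpha * x

def leaky_relu(x, alpha=0.1):
    # Single entry point: callers never see which implementation ran.
    if USE_CUSTOM_OP:
        return _custom_leaky_relu(x, alpha)
    return _fallback_leaky_relu(x, alpha)
```

In a real TF program the flag could be derived from the detected device rather than set by hand.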
This sounds to me like you could load a custom XLA HLO op on TPU without modifying the …
@XMaster96 Unfortunately not, because the underlying TPU ISA and the associated tools you would need to write an XLA op aren't exposed, even with the TPU VM preview.
@frankchn ok, thanks |
You can load custom ops that someone else (or you) has written as long as they are CPU custom ops (or you build them into a custom TF build). I don't think anyone outside of Google can write XLA custom ops that run on the TPU.
@bhack Since JAX already has Pallas for writing TPU kernels, is there any plan for a similar feature in TensorFlow?
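For context on what Pallas provides, a minimal kernel looks roughly like this (an elementwise add; `interpret=True` runs it in interpreter mode so it works off-TPU, and the names here are just illustrative):

```python
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def add_kernel(x_ref, y_ref, o_ref):
    # Refs give read/write access to the kernel's input/output buffers.
    o_ref[...] = x_ref[...] + y_ref[...]

def add(x, y):
    # interpret=True avoids needing real TPU/GPU lowering for this sketch.
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        interpret=True,
    )(x, y)
```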
Can we add something related to TPUs in the example?
There was a FAQ about creating custom ops for TPU: https://cloud.google.com/tpu/docs/faq