
[differentiable] make_adjoint pass should support mutable local variables and complex branching #581

Closed · yuanming-hu opened this issue Mar 11, 2020 · 2 comments

@yuanming-hu (Member):

(need to come up with a systematic solution.)
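To make the feature request concrete, here is a hypothetical illustration in plain Python (not Taichi syntax; all names are mine): a kernel that repeatedly overwrites a local variable cannot be differentiated by a naive source transform, because the reverse pass needs intermediate values that the forward pass destroyed. One systematic fix is to record the overwritten values on a tape during the forward sweep:

```python
# Hypothetical example of a forward computation that mutates a local
# variable -- the pattern the make_adjoint pass currently rejects.
def forward(x):
    s = x
    for _ in range(3):
        s = s * s          # each assignment overwrites the previous s
    return s               # computes x ** 8

# A correct reverse pass must recover the overwritten values, e.g. by
# recording them during the forward sweep (a "tape").
def forward_with_tape(x):
    s = x
    tape = []
    for _ in range(3):
        tape.append(s)     # save s before it is overwritten
        s = s * s
    return s, tape

def backward(tape, ds):
    # Adjoint of s = s * s is ds = ds * 2 * s, applied in reverse
    # order of the forward sweep, using the taped values of s.
    for s in reversed(tape):
        ds = ds * 2 * s
    return ds
```

For x = 2 the forward pass computes 2**8 = 256, and the reverse pass reproduces d(x**8)/dx = 8 * x**7 = 1024.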

@yuanming-hu added the "feature request" label Mar 11, 2020
@archibate (Collaborator) commented Mar 13, 2020:

Btw, what does differentiable mean? I guess it's related to kernel.grad()? Is this a feature unique to Taichi, not found in other systems like TVM? Sounds interesting! A recent issue #571 says `if` is not allowed in grad kernels, and I suggested using max/min instead. Is that expected to be resolved by this issue?

@yuanming-hu self-assigned this Mar 14, 2020
@yuanming-hu (Member, Author) commented Mar 14, 2020:

> Btw, what does differentiable mean? I guess it's related to kernel.grad()?

Exactly!

> Is this a feature unique to Taichi, not found in other systems like TVM?

Yes: Taichi has an imperative mega-kernel design, which differs from coarser-grained systems such as TensorFlow, Halide, and TVM.

> A recent issue #571 says `if` is not allowed in grad kernels, and I suggested using max/min instead. Is that expected to be resolved by this issue?

True.
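For reference, the max/min workaround from #571 can be sketched in plain Python (hypothetical function names, not the Taichi API): rewriting a branch in terms of max/min produces the same values while exposing a closed-form (sub)gradient to the autodiff pass.

```python
# Branchy clamp: the `if` form that grad kernels currently reject.
def clamp_branchy(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Branchless clamp: the max/min rewrite suggested in #571.
def clamp_branchless(x, lo, hi):
    return max(min(x, hi), lo)

# The (sub)gradient of the branchless form w.r.t. x: 1 inside [lo, hi],
# 0 outside -- exactly the selection that min/max encode.
def clamp_grad(x, lo, hi):
    return 1.0 if lo <= x <= hi else 0.0
```

Both forms agree everywhere, so the rewrite changes only how the gradient is derived, not the kernel's output.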

I just came up with a solution and will try to implement it tomorrow.
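The thread does not describe the solution itself; one standard way to handle the "complex branching" half of the title (a sketch under my own assumptions, in plain Python rather than Taichi IR) is to record which branch the forward pass took and select the matching adjoint branch in reverse:

```python
# Hypothetical sketch: differentiate through a data-dependent branch by
# taping the branch decision during the forward pass.
def forward_branch(x):
    taken = x > 0.0                     # record the branch condition
    y = x * x if taken else -x
    return y, taken

def backward_branch(x, taken, dy):
    # Adjoint of each branch, selected by the taped decision:
    # d(x*x)/dx = 2x on the taken branch, d(-x)/dx = -1 otherwise.
    return dy * 2.0 * x if taken else -dy
```

This generalizes the max/min trick: instead of rewriting the branch away, the reverse pass replays the recorded control flow.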
