
Remove VectorType and legacy loop vectorization #4096

Closed
re-xyr opened this issue Jan 24, 2022 · 10 comments · Fixed by #5918
Labels
feature request Suggest an idea on this project

Comments

@re-xyr
Contributor

re-xyr commented Jan 24, 2022

Since vectors are just 1-d tensors, can we merge these two datatypes?

  • From what I observe, vector splitting doesn't currently work on tensors. Will there be a problem if we extend that to tensors?
  • What other distinctions between vectors and tensors may block this change?
@re-xyr re-xyr added the feature request Suggest an idea on this project label Jan 24, 2022
@k-ye
Member

k-ye commented Jan 24, 2022

+1 on this.

IIUC, VectorType was originally created for CPU vectorization, which has not been in use for a long time. I'd suggest unifying the representation of vector/matrix types under TensorType. When we have enough time to restore CPU vectorization, we can give it a more explicit name.

@strongoier
Contributor

strongoier commented Jan 24, 2022

Currently our vectors/matrices correspond to TensorType. Feel free to ignore VectorType for now, as it exists only for historical reasons and is not used in the codebase.

@re-xyr
Contributor Author

re-xyr commented Jan 24, 2022

When I searched for VectorType usage, I found that a loop vectorization pass still uses it.

However, that pass never has any real effect: the only time it runs (on arch=cpu), the vector width is always 1, making it a no-op. Should we remove the pass altogether?

@strongoier
Contributor

I'm not sure if that legacy code (https://github.com/taichi-dev/taichi/blob/master/taichi/transforms/loop_vectorize.cpp) is useful for future CPU vectorization implementation. @k-ye @turbo0628

@re-xyr re-xyr changed the title Merge TensorType and VectorType Remove VectorType and legacy loop vectorization Jan 24, 2022
@re-xyr
Contributor Author

re-xyr commented Jan 24, 2022

There is a bit_loop_vectorize pass which is still in use. Is that a successor of the old vectorization pass?

@strongoier
Contributor

That one is for bit array vectorization as described in https://yuanming.taichi.graphics/publication/2021-quantaichi/quantaichi.pdf.

@k-ye
Member

k-ye commented Jan 24, 2022

> That one is for bit array vectorization as described in https://yuanming.taichi.graphics/publication/2021-quantaichi/quantaichi.pdf.

For the purpose of reducing dev confusion, let's just remove it for now. We can always get back the code if necessary.

@re-xyr
Contributor Author

re-xyr commented Jan 24, 2022

@k-ye There is also code dealing with vector "width" (which is always 1 at this point) throughout the codebase. Should we keep it as is (in the hope that vectorization comes back in the future) or remove it as well? (Most of it is in IR-to-backend passes or IR-to-IR optimization passes. I'm unfamiliar with those, so if we decide to remove that code too, it would be better to find someone else for it.)

@k-ye
Member

k-ye commented Jan 24, 2022

I believe the vector width code is more or less dealing with quant rather than vectorization? @Hanke98. If so, let's keep it.

@re-xyr
Contributor Author

re-xyr commented Jan 24, 2022

I changed the width() method to return 1 unconditionally, and all tests still passed. Moreover, width() only handles VectorType (which quant does not use), so I don't think it is.
