Adding facebook dlrm in SHARK/tank #185
Conversation
Hi @vid-999, I saw in the torch IR that some lowerings of the ops are missing.
Hi @pashu123, thanks for pointing that out. I am actually in the process of writing the linalg lowering for the aten.embedding_bag.padding_idx op. Also, maybe I misunderstood, but when you say "try", do you mean try to lower from Torch to Linalg?
Yes, I mean Torch to Linalg.
Cool!
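For readers unfamiliar with the op being discussed: the semantics that a Torch-to-Linalg lowering of aten.embedding_bag must reproduce can be sketched in plain Python. This is an illustrative model of the op's behavior for mode "sum" with padding_idx, not torch-mlir code; the function name is hypothetical.

```python
# Plain-Python sketch of aten.embedding_bag semantics (mode "sum"),
# including padding_idx handling. Illustrative only -- not the actual
# torch-mlir lowering.

def embedding_bag_sum(weight, indices, offsets, padding_idx=None):
    """weight: list of embedding rows; indices: flat index list;
    offsets: start position of each bag within `indices`."""
    dim = len(weight[0])
    bags = []
    for b, start in enumerate(offsets):
        end = offsets[b + 1] if b + 1 < len(offsets) else len(indices)
        acc = [0.0] * dim
        for i in indices[start:end]:
            if i == padding_idx:  # rows at padding_idx do not contribute
                continue
            acc = [a + w for a, w in zip(acc, weight[i])]
        bags.append(acc)
    return bags

weight = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
# Two bags: indices [0, 1] and [2, 1]; index 1 is the padding index.
out = embedding_bag_sum(weight, [0, 1, 2, 1], [0, 2], padding_idx=1)
# out == [[1.0, 2.0], [5.0, 6.0]]
```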
I would avoid checking in the .linalg and .torch files. Maybe add a comment on how to generate them.
We can save the generated linalg, input, and golden output in the shark tank and then enable the tests.
We should just add it to the gen_shark_tank.py script, right? The upload should happen via @dan-garvey's nightly build change.
I added 2 manual tests for the QrEmbedding and PrEmbedding modes. For the QrEmbedding mode, one more op still needs to be implemented.
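For context on the QrEmbedding mode mentioned above: DLRM's quotient-remainder (QR) trick replaces one large embedding table with two small tables, indexed by the quotient and remainder of the original index, whose rows are then combined elementwise. A lowering of this mode has to express that index arithmetic. A minimal sketch (illustrative names, not the DLRM or torch-mlir API):

```python
# Sketch of the quotient-remainder (QR) embedding composition used by
# DLRM: look up idx // num_collisions in one table and
# idx % num_collisions in another, then combine the two rows.

def qr_embedding(q_table, r_table, idx, num_collisions, op="mult"):
    q = q_table[idx // num_collisions]   # quotient lookup
    r = r_table[idx % num_collisions]    # remainder lookup
    if op == "mult":
        return [a * b for a, b in zip(q, r)]
    return [a + b for a, b in zip(q, r)]  # "add" combination

q_table = [[1.0, 2.0], [3.0, 4.0]]  # ceil(num_embeddings / num_collisions) rows
r_table = [[0.5, 0.5], [2.0, 2.0]]  # num_collisions rows
# idx 3 -> quotient 1, remainder 1 -> [3.0 * 2.0, 4.0 * 2.0]
result = qr_embedding(q_table, r_table, 3, num_collisions=2)
# result == [6.0, 8.0]
```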
WIP: DLRM lowering through SHARK.
Currently failing to lower to the Torch backend IR.
Exception:
Lowering TorchScript IR -> Torch Backend IR failed with the following diagnostics:
error: found a non-value tensor type, this is likely due to a missing case in the MaximizeValueSemantics pass
note: see current operation: %25 = "torch.copy.to_tensor"(%24) : (!torch.vtensor) -> !torch.tensor