Next Ops to work on #922

Open
AlexandreEichenberger opened this issue Oct 18, 2021 · 22 comments

@AlexandreEichenberger
Collaborator

Idea: post a quick comment here to claim the operations that you are currently working on, so that we do not replicate work.
You can also add a request for a new op.

@AlexandreEichenberger
Collaborator Author

Working on compress

@tungld
Collaborator

tungld commented Oct 19, 2021

Working on NonMaxSuppression

@etiotto
Collaborator

etiotto commented Oct 19, 2021

I am working on SpaceToDepth (#926) and DepthToSpace (#927).

@AlexandreEichenberger
Collaborator Author

AlexandreEichenberger commented Oct 20, 2021

FYI, here are some of the benchmarks we are focusing on that have ops that are not yet working.

high priority (from the model zoo):

  • Roberta
  • Bertsquad (‘onehot’)
  • Bidaf (‘compress’ [edit: now worked on], ‘hardmax’, ‘categorymapper’ [edit: now worked on])
  • yolo3: ‘nonmaxsuppression’ [edit: now worked on]
  • tiny-yolo3: ‘round’ [edit: now supported], ‘nonmaxsuppression’ [edit: now worked on]

high priority: support models compiled down to their lowest-level components (e.g., RNNs not exported as high-level ONNX ops), without crashing.

medium priority: Hugging Face GBERTQnA.

A list of ops that are present in the Model Zoo but not yet supported is given at the end of issue #128.

tungld pinned this issue Oct 21, 2021
@etiotto
Collaborator

etiotto commented Oct 21, 2021

I am going to look at categorymapper (#941).

@AlexandreEichenberger
Collaborator Author

Working on OneHot to support multiple types.

@tungld
Collaborator

tungld commented Oct 26, 2021

Working on Hardmax to support Bidaf. PR #950 (merged).

@chentong319
Collaborator

Working on Resize.

@mmoldawsky
Contributor

Working on IsNaN op

@etiotto
Collaborator

etiotto commented Apr 14, 2022

Working on ScatterElements (needed by fasterrcnn-10.onnx, maskrcnn-10.onnx).
PR is #1352

Scatter is deprecated but we map it to ScatterElements. PR is #1337
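
For reference, a minimal numpy sketch of the ScatterElements semantics being implemented here (per the ONNX spec, with the axis attribute and no reduction; illustrative only, not onnx-mlir code):

```python
import numpy as np

# ONNX ScatterElements (no reduction): for every position i in
# `indices`, replace the coordinate along `axis` with indices[i]
# and write updates[i] at the resulting location.
def scatter_elements(data, indices, updates, axis=0):
    out = np.copy(data)
    for i in np.ndindex(indices.shape):
        j = list(i)
        j[axis] = indices[i]
        out[tuple(j)] = updates[i]
    return out

data = np.zeros((3, 3), dtype=np.float32)
indices = np.array([[1, 0, 2], [0, 2, 1]])
updates = np.array([[1.0, 1.1, 1.2], [2.0, 2.1, 2.2]], dtype=np.float32)
print(scatter_elements(data, indices, updates, axis=0))
```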

@etiotto
Collaborator

etiotto commented Apr 20, 2022

Working on ScatterND. PR is #1370

@etiotto
Collaborator

etiotto commented Apr 27, 2022

Implemented GatherElements. PR is #1375.

@etiotto
Collaborator

etiotto commented Apr 27, 2022

Working on GatherND. PR is #1382.

@AlexandreEichenberger
Collaborator Author

The status of implemented ops is now listed here: https://github.com/onnx/onnx-mlir/blob/main/docs/SupportedONNXOps-cpu.md

@airMeng
Contributor

airMeng commented Sep 23, 2022

Hi, thank you for your excellent work!
I am quite new to MLIR, so these questions may be naive. I see that ArgMax is supported in onnx-mlir but ArgMin is not; is there any particular issue with ArgMin? If not, can I open a PR for ArgMin based on ArgMax with small modifications?
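
For reference, ONNX ArgMin mirrors ArgMax (same axis, keepdims, and select_last_index attributes, int64 output), so the lowering should indeed be a near-copy. A minimal numpy sketch of the expected semantics:

```python
import numpy as np

# ONNX ArgMin reference semantics: ArgMax with min in place of max.
# With the default select_last_index=0, the index of the first
# occurrence of the minimum along `axis` is returned.
def argmin(data, axis=0, keepdims=1, select_last_index=0):
    if select_last_index:
        flipped = np.flip(data, axis=axis)
        idx = data.shape[axis] - 1 - np.argmin(flipped, axis=axis)
    else:
        idx = np.argmin(data, axis=axis)
    idx = idx.astype(np.int64)
    return np.expand_dims(idx, axis=axis) if keepdims else idx
```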

etiotto removed their assignment Sep 23, 2022
@tungld
Collaborator

tungld commented Sep 26, 2022

@airMeng please go ahead with a PR for ArgMin. Thank you!

@Ris-Bali

Hi, can I work on the Celu op?

@muzafferkal

Could somebody please add support for the QuantizeLinear/DequantizeLinear ops? They are needed for quantized networks.
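
For context, these two ops have simple per-tensor semantics; a minimal numpy sketch of the ONNX spec formulas for the uint8 case (illustrative only, not onnx-mlir code):

```python
import numpy as np

# ONNX QuantizeLinear: y = saturate(round(x / scale) + zero_point),
# rounding half to even and saturating to the uint8 range here.
def quantize_linear(x, scale, zero_point):
    return np.clip(np.rint(x / scale) + zero_point, 0, 255).astype(np.uint8)

# ONNX DequantizeLinear: x = (y - zero_point) * scale.
def dequantize_linear(y, scale, zero_point):
    return ((y.astype(np.int32) - zero_point) * scale).astype(np.float32)

x = np.array([-0.5, 0.0, 0.42, 1.0], dtype=np.float32)
q = quantize_linear(x, scale=0.01, zero_point=128)
print(q, dequantize_linear(q, scale=0.01, zero_point=128))
```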

@srcarroll
Contributor

Is anyone working on extending the decompose-onnx pass (or a similar pass) to support more onnx.Custom ops? In particular, I am trying to compile a basic backwards graph and get an InPlaceAccumulatorV2 custom op after converting ONNX to MLIR with onnx-mlir --EmitONNXIR.
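
For what it is worth, a minimal sketch (using the onnx Python package; the file name is a placeholder) of listing the ops a model declares outside the default ai.onnx domain, which is one way to see where an op like InPlaceAccumulatorV2 comes from:

```python
import onnx

# Load the exported graph (placeholder file name) and list every node
# whose op lives outside the default ai.onnx domain -- these are the
# candidates that end up as onnx.Custom after import.
model = onnx.load("backward_graph.onnx")
for node in model.graph.node:
    if node.domain not in ("", "ai.onnx"):
        print(node.domain, node.op_type)
```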

@AlexandreEichenberger
Collaborator Author

@srcarroll CustomOp is a non-standard op that @chentong319 added to easily convert a custom op into a function call. While Tong knows best, my recollection (and I may be wrong here) is that custom ops are mainly generated within onnx-mlir, and not parsed in from an ONNX protobuf.

Tong is away for a bit; if you want to add support for more custom ops, we would certainly be interested in taking in the changes. There are also ONNX functions; maybe those may help too.

@srcarroll
Contributor

srcarroll commented Jun 24, 2024

@AlexandreEichenberger thanks for the response. I'd be happy to add support, but I can't find any info on the definition of InPlaceAccumulatorV2. Do you know where I can find that?

Could you also point me to the ONNX functions you are referring to, and how to emit them? Thanks.

@AlexandreEichenberger
Collaborator Author

I could not find info about your "InPlaceAccumulatorV2"; it does not appear in the ONNX specs or in what I could find about the ONNX Runtime extensions, though I may have overlooked something in ORT as I am not very familiar with it. How did you create the ONNX graph?

As for creating custom ONNX ops, the ORT documentation gives a reference for the preferred way to make new custom ops: https://onnxruntime.ai/docs/reference/operators/add-custom-op.html

The ONNX specs also have a section on ONNX functions: https://onnx.ai/onnx/intro/concepts.html#functions
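
For illustration, a minimal sketch of defining an ONNX function with the onnx Python helpers (the name and domain below are made up for the example):

```python
import onnx
from onnx import helper

# A function op "Square" in a made-up custom domain, expressed as a
# composition of the standard Mul op.
square_fn = helper.make_function(
    domain="custom.domain",
    fname="Square",
    inputs=["X"],
    outputs=["Y"],
    nodes=[helper.make_node("Mul", ["X", "X"], ["Y"])],
    opset_imports=[helper.make_opsetid("", 13)],
)
# A model that uses "Square" carries this definition in its
# `functions` field, so an importer can expand the call into the body.
```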
