
Release builds seem to take longer (sometimes) with new docker changes #1322

Open
powderluv opened this issue Aug 31, 2022 · 1 comment
@powderluv (Collaborator) commented:

Before: 2h 15m (https://github.com/llvm/torch-mlir/actions/runs/2955716430)

After: the build time varies; it sometimes times out at 6 hours and may pass on a re-run.

powderluv self-assigned this on Aug 31, 2022
@powderluv (Collaborator, Author) commented:

This could be the --ipc=host setting or the ulimit settings (required for the tests).
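
For reference, a minimal sketch of the kind of `docker run` invocation this comment refers to, assuming the CI wraps the build and tests in a container. `--ipc=host` and `--ulimit` are standard docker run flags, but the image name, script, and limit value below are illustrative placeholders, not the project's actual CI configuration:

  # Illustrative only: --ipc=host shares the host's IPC namespace (shared memory)
  # with the container, and --ulimit raises a per-process resource limit inside it.
  # The image, script, and limit values are placeholders, not torch-mlir's settings.
  docker run --rm \
    --ipc=host \
    --ulimit nofile=8192:8192 \
    torch-mlir-ci-image:latest \
    ./build_and_test.sh

Both flags mainly affect how the test processes behave inside the container (shared memory and resource limits), which is why the comment above singles them out as candidate causes.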

qedawkins pushed a commit to nod-ai/torch-mlir that referenced this issue Oct 3, 2022
This makes it easy to declare shape-helper classes for ONNX operators and reduces boilerplate code.
As an example, to declare shape helpers for the ONNXArgMinOp and ONNXArgMaxOp operators:

  DECLARE_SHAPE_HELPER(ONNXArgMaxOp)
  DECLARE_SHAPE_HELPER(ONNXArgMinOp)

Implement shape inference for the ArgMinOp ONNX operator.

Signed-off-by: Ettore Tiotto <etiotto@ca.ibm.com>