Add generated files for unsupported to DeformConv op
cjvolzka committed Aug 22, 2023
1 parent 159ba49 commit ed4d4ec
Showing 6 changed files with 824 additions and 743 deletions.
6 changes: 3 additions & 3 deletions docs/Dialects/krnl.md
… means to block the for loop referred to by %i using a tile size of 4.
_Call operation_

The call operation provides a generic way to replace an ONNX Op with a call
to an external function at Krnl level.
The `funcName` attribute determines which function to call.
`parameters` holds the inputs to Krnl.Call. It includes the outputs and inputs
of the ONNX Op. The outputs and inputs are already lowered to MemRefs.
The external function is assumed NOT to allocate or free any memory.
The `numOfOutput` attribute tells how many of the MemRefs in `parameters` are outputs.
48 changes: 44 additions & 4 deletions docs/Dialects/onnx.md
Effects: MemoryEffects::Effect{}
| Result | Description |
| :----: | ----------- |
| `output` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or tensor of bfloat16 type values

### `onnx.DeformConv` (ONNXDeformConvOp)

_ONNX DeformConv operation_

Performs deformable convolution as described in https://arxiv.org/abs/1703.06211 and https://arxiv.org/abs/1811.11168.
This operator specification supports the general N-D case. Note that most common use cases have 2D or 3D data.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), ShapeHelperOpInterface, ShapeInferenceOpInterface

Effects: MemoryEffects::Effect{}

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `dilations` | ::mlir::ArrayAttr | 64-bit integer array attribute
| `group` | ::mlir::IntegerAttr | 64-bit signed integer attribute
| `kernel_shape` | ::mlir::ArrayAttr | 64-bit integer array attribute
| `offset_group` | ::mlir::IntegerAttr | 64-bit signed integer attribute
| `pads` | ::mlir::ArrayAttr | 64-bit integer array attribute
| `strides` | ::mlir::ArrayAttr | 64-bit integer array attribute

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `X` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `W` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `offset` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `B` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or none type
| `mask` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or none type

#### Results:

| Result | Description |
| :----: | ----------- |
| `Y` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
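To make the operand shapes above concrete, here is a minimal NumPy sketch of the 2D case, assuming `group = offset_group = 1`, unit strides and dilations, and no padding. The `bilinear` and `deform_conv2d` helpers are our own illustrative names, and the interleaved per-kernel-position `(dy, dx)` offset-channel ordering is an assumption to check against the ONNX specification, not a definitive implementation.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly sample img[H, W] at real coordinates (y, x); zero outside."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for yy, wy in ((y0, 1 - (y - y0)), (y0 + 1, y - y0)):
        for xx, wx in ((x0, 1 - (x - x0)), (x0 + 1, x - x0)):
            if 0 <= yy < H and 0 <= xx < W:
                val += wy * wx * img[yy, xx]
    return val

def deform_conv2d(X, W_, offset, B=None, mask=None):
    # X: [N, C, H, W], W_: [OC, C, kH, kW]
    # offset: [N, 2*kH*kW, oH, oW] (assumed (dy, dx) interleaved per kernel pos)
    # mask: [N, kH*kW, oH, oW] or None
    N, C, H, Wd = X.shape
    OC, _, kH, kW = W_.shape
    oH, oW = H - kH + 1, Wd - kW + 1
    Y = np.zeros((N, OC, oH, oW))
    for n in range(N):
        for oc in range(OC):
            for oh in range(oH):
                for ow in range(oW):
                    acc = 0.0
                    for c in range(C):
                        for kh in range(kH):
                            for kw in range(kW):
                                k = kh * kW + kw
                                dy = offset[n, 2 * k, oh, ow]
                                dx = offset[n, 2 * k + 1, oh, ow]
                                m = mask[n, k, oh, ow] if mask is not None else 1.0
                                acc += W_[oc, c, kh, kw] * m * \
                                    bilinear(X[n, c], oh + kh + dy, ow + kw + dx)
                    Y[n, oc, oh, ow] = acc + (B[oc] if B is not None else 0.0)
    return Y

# With zero offsets and no mask, this reduces to an ordinary convolution:
X = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
W_ = np.ones((1, 1, 2, 2))
offset = np.zeros((1, 8, 3, 3))   # 2 * kH * kW = 8 offset channels
Y = deform_conv2d(X, W_, offset)  # Y[0, 0, 0, 0] == 0 + 1 + 4 + 5 == 10
```

Nonzero offsets shift each kernel tap to a fractional sampling location, which is where the bilinear interpolation comes in.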

### `onnx.DepthToSpace` (ONNXDepthToSpaceOp)

_ONNX DepthToSpace operation_
Effects: MemoryEffects::Effect{}

_An operation that transforms data between different layout formats_

An operation that transforms a tensor from one layout to another layout.
A layout is defined by an attribute, i.e. `target_layout`, which allows this
operation to work with an arbitrary layout (e.g. a layout used for accelerators).

`target_layout` is optional. If it is not given, the input tensor will be
transformed to a normal tensor that does not have a layout.

If `target_layout` is the same as the input's layout, this operation will
become a no-op by canonicalization.

The input and output tensors must have the same shape.

MaxPool consumes an input tensor X and applies max pooling across
```
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
```
The output of each pooling window is the maximum of the elements in the window, excluding padding.
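For one spatial axis, the `pad_shape` formula above can be evaluated directly. A small sketch, assuming the SAME auto-pad convention in which the output size is `ceil(input / stride)` (the helper name is ours):

```python
import math

def same_pad_total(input_sz, kernel, stride, dilation=1):
    # Output size under SAME auto_pad (an assumption for this example).
    output_sz = math.ceil(input_sz / stride)
    # pad_shape[i] from the formula above, for this one axis.
    return (output_sz - 1) * stride + ((kernel - 1) * dilation + 1) - input_sz

# input 7, kernel 3, stride 2 -> output 4, total padding 2
print(same_pad_total(7, 3, 2))
```

The result is the *total* padding along the axis; how it splits between begin and end depends on SAME_UPPER vs SAME_LOWER.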


Traits: AlwaysSpeculatableImplTrait
_ONNX Softmax operation_

The operator computes the normalized exponential values for the given input:

Softmax(input, axis) = Exp(input) / ReduceSum(Exp(input), axis=axis, keepdims=1)
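The formula above can be sketched in NumPy. The max-subtraction below is not in the formula but is mathematically equivalent (Softmax is invariant to shifting the input) and avoids overflow in `exp`:

```python
import numpy as np

def softmax(x, axis=-1):
    # Shift by the max along `axis` for numerical stability.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

print(softmax(np.array([1.0, 1.0])))  # equal inputs -> [0.5, 0.5]
```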

The "axis" attribute indicates the dimension along which Softmax
will be performed. The output tensor has the same shape
36 changes: 18 additions & 18 deletions docs/Dialects/zhigh.md
Effects: MemoryEffects::Effect{}

ZHigh 2D convolution operation

ZHigh operation to perform 2D convolution.
* input: `[num_batches, height_in, width_in, channels_in]`
* input_kernel: `[kernel_height, kernel_width, channels_in, channels_out]`
* input_bias: `[channels_out]`
* kernel_shape: 1D array of kernel height and width
* strides: 1D array of stride height and width
* padding_type: SAME_PADDING or VALID_PADDING
* act_func: ACT_NONE or ACT_RELU
* output: `[num_batches, height_out, width_out, channels_out]`
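The `height_out`/`width_out` in the output shape above follow from the padding type. A sketch using the common SAME/VALID conventions (an assumption here; verify against the NNPA documentation for the exact rule — the helper name is ours):

```python
import math

def conv2d_out_size(in_size, kernel, stride, padding_type):
    if padding_type == "SAME_PADDING":
        # SAME: pad so that output covers every stride position.
        return math.ceil(in_size / stride)
    # VALID_PADDING: only kernel placements fully inside the input.
    return math.ceil((in_size - kernel + 1) / stride)

# height_in=224, kernel 3, stride 2:
#   SAME_PADDING -> 112, VALID_PADDING -> 111
```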

Traits: AlwaysSpeculatableImplTrait
ZHigh GRU operation
* Shape for input_weights is `[D, I, 3*H]`.
* Shape for hidden_weights is `[D, H, 3*H]`.
* Shape for input_bias and hidden_bias is `[D, 3*H]`.
* Shape for hn_output is `[S, D, B, H]` if returning all timesteps
  and `[1, D, B, H]` if returning only the final step.
* S is timesteps, D is the number of directions (1 for unidirectional and
  2 for bidirectional), B is batch size, I is input size, and
  H is hidden size.
* direction accepts "forward", "reverse", or "bidirectional".
* return_all_steps: -1 returns all timesteps, 0 returns only the last timestep.
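The shape relations listed above can be spelled out concretely. A sketch with made-up sizes (S=5 timesteps, D=2 directions, B=3, I=4, H=6); the `hn_output_shape` helper is ours, for illustration only:

```python
S, D, B, I, H = 5, 2, 3, 4, 6  # made-up sizes for illustration

input_weights = (D, I, 3 * H)          # (2, 4, 18)
hidden_weights = (D, H, 3 * H)         # (2, 6, 18)
input_bias = hidden_bias = (D, 3 * H)  # (2, 18)

def hn_output_shape(return_all_steps):
    # -1: all timesteps; 0: final step only.
    return (S, D, B, H) if return_all_steps == -1 else (1, D, B, H)
```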
ZHigh operation to perform an LSTM.
* Shape for input_weights is `[D, I, 4*H]`.
* Shape for hidden_weights is `[D, H, 4*H]`.
* Shape for input_bias and hidden_bias is `[D, 4*H]`.
* Shape for hn_output is `[S, D, B, H]` if returning all timesteps
  and `[1, D, B, H]` if returning only the final step.
* Shape for cf_output is `[1, D, B, H]`.
* S is timesteps, D is the number of directions (1 for unidirectional and
  2 for bidirectional), B is batch size, I is input size, and
  H is hidden size.
* direction accepts "forward", "reverse", or "bidirectional".
* return_all_steps: -1 returns all timesteps, 0 returns only the last timestep.
Effects: MemoryEffects::Effect{}

ZHigh 2D mean reduce operation

ZHigh operation to perform 2D mean reduce. Given an input 4D tensor,
returns a downsampled tensor reducing the middle 2nd and 3rd dimensions
to a size of 1 based on the mean of the original values.
Input and Output tensors should be in the 3D layout.
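A sketch of this reduction, assuming an `[num_batches, height, width, channels]` 4D tensor as in the zhigh.Conv2D shapes above (the helper name is ours):

```python
import numpy as np

def mean_reduce_2d(x):
    # x: [num_batches, height, width, channels]; the two middle (spatial)
    # dimensions collapse to size 1 via the mean.
    return x.mean(axis=(1, 2), keepdims=True)

x = np.arange(24, dtype=float).reshape(1, 2, 3, 4)
y = mean_reduce_2d(x)  # shape (1, 1, 1, 4)
```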

Effects: MemoryEffects::Effect{}
ZHigh stick operation for GRU

ZHigh operation to perform a stick for GRU.
Variadic: list of pointers for input data to be transformed:
- GRU concatenated: 3 data pointers, one for each input gate in
(Z)update, Reset, Hidden, (ZRH) gate order

Effects: MemoryEffects::Effect{}
ZHigh stick operation for LSTM

ZHigh operation to perform a stick for LSTM.
Variadic: list of pointers for input data to be transformed:
- LSTM concatenated: 4 data pointers, one for each input gate in
  Forget, Input, Cell, Output (FICO) order.

Traits: AlwaysSpeculatableImplTrait

4 changes: 2 additions & 2 deletions docs/Dialects/zlow.md
Traits: MemRefsNormalizable
ZLow stick operation for GRU

ZLow operation to perform a stick for GRU.
Variadic: list of pointers for input data to be transformed:
- GRU concatenated: 3 data pointers, one for each input gate in (Z)update, Reset, Hidden, (ZRH) gate order.

Traits: MemRefsNormalizable
Traits: MemRefsNormalizable
ZLow stick operation for LSTM

ZLow operation to perform a stick for LSTM.
Variadic: list of pointers for input data to be transformed:
- LSTM concatenated: 4 data pointers, one for each input gate in Forget, Input, Cell, Output (FICO) order.

Traits: MemRefsNormalizable
