# Operator Schemas

## Table of Contents

This file is automatically generated from the def files via this script. Do not modify it directly; instead, edit the operator definitions.

|Operator |Input |Output |Type Constraint | Version |
|-|-|-|-|-|
|Abs|Input tensor|Output tensor|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to all numeric tensors.|6|
|Acos|Input tensor|The arccosine of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|7|
|Acosh|Input tensor|The hyperbolic arccosine values of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|9|
|Add|First operand.
Second operand.|Result, has same element type as two inputs|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|7| |And|First input operand for the logical operator.
Second input operand for the logical operator.|Result tensor.|tensor(bool)Constrains input to boolean tensor.
tensor(bool)Constrains output to boolean tensor.|7| |ArgMax|An input tensor.|Reduced output tensor with integer data type.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to all numeric tensors.|1| |ArgMin|An input tensor.|Reduced output tensor with integer data type.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to all numeric tensors.|1| |Asin|Input tensor|The arcsine of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|7| |Asinh|Input tensor|The hyperbolic arcsine values of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|9| |Atan|Input tensor|The arctangent of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|7| |Atanh|Input tensor|The hyperbolic arctangent values of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|9| |AveragePool|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].|Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|10| |BatchNormalization|Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1
Scale tensor of shape (C).
Bias tensor of shape (C).
running (training) or estimated (testing) mean tensor of shape (C).
running (training) or estimated (testing) variance tensor of shape (C). (1 - 5)|The output tensor of the same shape as X
The running mean after the BatchNormalization operator.
The running variance after the BatchNormalization operator.
Saved mean used during training to speed up gradient computation.
Saved variance used during training to speed up gradient computation.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|9| |Cast|Input tensor to be cast.|Output tensor with the same shape as input with type specified by the 'to' argument|tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string)Constrain input types. Casting from complex is not supported.
tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string)Constrain output types. Casting to complex is not supported.|9| |Ceil|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Clip|Input tensor whose elements to be clipped|Output tensor with clipped input elements|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Compress|Tensor of rank r >= 1.
Rank 1 tensor of booleans indicating which slices or data elements are to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.|Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.
tensor(bool)Constrains to boolean tensors.|9| |Concat|(1 - ∞) List of tensors for concatenation|Concatenated tensor|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain output types to any tensor type.|4| |Constant||Output tensor containing the same value as the provided tensor.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|9| |ConstantOfShape|1D tensor. The shape of the expected output tensor. If an empty tensor is given, the output would be a scalar.|Output tensor of shape specified by 'input'. If attribute 'value' is specified, the value and datatype of the output tensor are taken from 'value'. If attribute 'value' is not specified, the value in the output defaults to 0, and the datatype defaults to float32.|tensor(int64)Constrain input types.
tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool)Constrain output types to be numerics.|9| |Conv|(2 - 3) Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 ... x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x ... x kn), where (k1 x k2 x ... kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL ...]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
Optional 1D bias to be added to the convolution, has size of M.|Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |ConvInteger|(2 - 4) Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 ... x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x ... x kn), where (k1 x k2 x ... kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL ...]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
Zero point tensor for input 'x'. It's optional and its default value is 0. It's a scalar, which means a per-tensor/layer quantization.
Zero point tensor for input 'w'. It's optional and its default value is 0. It could be a scalar or a 1-D tensor, which means a per-tensor/layer or per output channel quantization. If it's a 1-D tensor, its number of elements should be equal to the number of output channels (M).|Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.|tensor(int8), tensor(uint8)Constrain input x and its zero point data type to 8-bit integer tensor.
tensor(int8), tensor(uint8)Constrain input w and its zero point data type to 8-bit integer tensor.
tensor(int32)Constrain output y data type to 32-bit integer tensor.|10| |ConvTranspose|(2 - 3) Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 ... x Dn)
The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x ... x kn), where (k1 x k2 x ... x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Optional 1D bias to be added to the convolution, has size of M.|Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |Cos|Input tensor|The cosine of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|7| |Cosh|Input tensor|The hyperbolic cosine values of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|9| |DepthToSpace|Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.|Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|1| |DequantizeLinear|(2 - 3) N-D quantized input tensor to be de-quantized.
Scale for input 'x'. It's a scalar, which means a per-tensor/layer quantization.
Zero point for input 'x'. It's a scalar, which means a per-tensor/layer quantization. It's optional. 0 is the default value when it's not specified.|N-D full precision output tensor. It has same shape as input 'x'.|tensor(int8), tensor(uint8), tensor(int32)Constrain 'x_zero_point' and 'x' to 8-bit/32-bit integer tensor.|10| |Div|First operand.
Second operand.|Result, has same element type as two inputs|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|7| |Dropout|The input data as Tensor. (1 - 2)|The output.
The output mask.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.
tensor(bool)Constrain output mask types to boolean tensors.|10| |Elu|1D input tensor|1D input tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Equal|First input operand for the logical operator.
Second input operand for the logical operator.|Result tensor.|tensor(bool), tensor(int32), tensor(int64)Constrains input to integral tensors.
tensor(bool)Constrains output to boolean tensor.|7| |Erf|Input tensor|The error function of the input tensor computed element-wise. It has the same shape and type of the input.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to all numeric tensors.|9| |Exp|Input tensor|The exponential of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Expand|Input tensor
A 1-D tensor indicates the shape you want to expand to, following the broadcast rule|Output tensor|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensors.|8| |EyeLike|2D input tensor to copy shape, and optionally, type information from.|Output tensor, same shape as input tensor T1.|tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool)Constrain input types. Strings and complex are not supported.
tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool)Constrain output types. Strings and complex are not supported.|9| |Flatten|A tensor of rank >= axis.|A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output to all tensor types.|9| |Floor|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |GRU|(3 - 6) The input sequences packed (and potentially padded) into one 3-D tensor with the shape of [seq_length, batch_size, input_size].
The weight tensor for the gates. Concatenation of W[zrh] and WB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, input_size].
The recurrence weight tensor. Concatenation of R[zrh] and RB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, hidden_size].
The bias tensor for the gates. Concatenation of [Wb[zrh], Rb[zrh]] and [WBb[zrh], RBb[zrh]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 6*hidden_size]. Optional: If not specified - assumed to be 0
Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size]. (0 - 2)|A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].
The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.
tensor(int32)Constrain seq_lens to integer tensor.|7| |Gather|Tensor of rank r >= 1.
Tensor of int32/int64 indices, of any rank q.|Tensor of rank q + (r - 1).|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to any tensor type.
tensor(int32), tensor(int64)Constrain indices to integer types|1| |Gemm|Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is non-zero.
Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is non-zero.
Input tensor C. The shape of C should be unidirectional broadcastable to (M, N).|Output tensor of shape (M, N).|tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64)Constrain input and output types to float/int tensors.|9| |GlobalAveragePool|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size.|Output data tensor from pooling across the input tensor. Dimensions will be N x C x 1 x 1|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |GlobalLpPool|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size.|Output data tensor from pooling across the input tensor. Dimensions will be N x C x 1 x 1|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|2| |GlobalMaxPool|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size.|Output data tensor from pooling across the input tensor. Dimensions will be N x C x 1 x 1|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |Greater|First input operand for the logical operator.
Second input operand for the logical operator.|Result tensor.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrains input types to all numeric tensors.
tensor(bool)Constrains output to boolean tensor.|9| |HardSigmoid|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Hardmax|The input tensor that's coerced into a 2D matrix of size (NxD) as described above.|The output values with the same shape as input tensor (the original size without coercion).|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |Identity|Input tensor|Tensor to copy input into.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|1| |If|Condition for the if (1 - ∞)|Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same shape and same data type.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)All Tensor types
tensor(bool)Only bool|1| |InstanceNormalization|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size.
The input 1-dimensional scale tensor of size C.
The input 1-dimensional bias tensor of size C.|The output tensor of the same shape as input.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |IsInf|input|output|tensor(float), tensor(double)Constrain input types to float tensors.
tensor(bool)Constrain output types to boolean tensors.|10| |IsNaN|input|output|tensor(float16), tensor(float), tensor(double)Constrain input types to float tensors.
tensor(bool)Constrain output types to boolean tensors.|9| |LRN|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].|Output tensor, which has the shape and type as input tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |LSTM|(3 - 8) The input sequences packed (and potentially padded) into one 3-D tensor with the shape of [seq_length, batch_size, input_size].
The weight tensor for the gates. Concatenation of W[iofc] and WB[iofc] (if bidirectional) along dimension 0. The tensor has shape [num_directions, 4*hidden_size, input_size].
The recurrence weight tensor. Concatenation of R[iofc] and RB[iofc] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 4*hidden_size, hidden_size].
The bias tensor for input gate. Concatenation of [Wb[iofc], Rb[iofc]], and [WBb[iofc], RBb[iofc]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 8*hidden_size]. Optional: If not specified - assumed to be 0.
Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
Optional initial value of the cell. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
The weight tensor for peepholes. Concatenation of P[iof] and PB[iof] (if bidirectional) along dimension 0. It has shape [num_directions, 3*hidden_size]. Optional: If not specified - assumed to be 0. (0 - 3)|A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].
The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
The last output value of the cell. It has shape [num_directions, batch_size, hidden_size].|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.
tensor(int32)Constrain seq_lens to integer tensor.|7| |LeakyRelu|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Less|First input operand for the logical operator.
Second input operand for the logical operator.|Result tensor.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrains input types to all numeric tensors.
tensor(bool)Constrains output to boolean tensor.|9| |Log|Input tensor|The natural log of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |LogSoftmax|The input tensor that's coerced into a 2D matrix of size (NxD) as described above.|The output values with the same shape as input tensor (the original size without coercion).|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |Loop|(3 - ∞) A maximum trip-count for the loop specified at runtime. Optional. Pass empty string to skip.
A boolean termination condition. Optional. Pass empty string to skip.
The initial values of any loop-carried dependencies (values that change across loop iterations) (1 - ∞)|Final N loop carried dependency values then K scan_outputs|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)All Tensor types
tensor(int64)tensor of int64, which should be a scalar.
tensor(bool)tensor of bool, which should be a scalar.|1| |LpNormalization|Input matrix|Matrix after normalization|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |LpPool|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size.|Output data tensor from Lp pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|2| |MatMul|N-dimensional matrix A
N-dimensional matrix B|Matrix multiply results from A * B|tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64)Constrain input and output types to float/int tensors.|9| |MatMulInteger|(2 - 4) N-dimensional matrix A
N-dimensional matrix B
Zero point tensor for input 'A'. It's optional and its default value is 0. It could be a scalar or a 1-D tensor, which means a per-tensor or per-row quantization. If it's a 1-D tensor, its number of elements should be equal to the number of rows of input 'A'.
Zero point tensor for input 'B'. It's optional and its default value is 0. It could be a scalar or a 1-D tensor, which means a per-tensor or per-column quantization. If it's a 1-D tensor, its number of elements should be equal to the number of columns of input 'B'.|Matrix multiply results from A * B|tensor(int8), tensor(uint8)Constrain input A data type to 8-bit integer tensor.
tensor(int8), tensor(uint8)Constrain input B data type to 8-bit integer tensor.
tensor(int32)Constrain output Y data type as 32-bit integer tensor.|10| |Max|(1 - ∞) List of tensors for max.|Output tensor.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|8| |MaxPool|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...]. (1 - 2)|Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used
Indices tensor from max pooling across the input tensor. The dimensions of indices are the same as the output tensor. The values in indices are the indices of the selected values during pooling. The indices are computed as a flattened 1-D tensor, and the indices do not consider padding. So the values in indices are in [0, N x C x D1 x ... x Dn).|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.
tensor(int64)Constrain index tensor to int64|10| |MaxRoiPool|Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data.
RoIs (Regions of Interest) to pool over. Should be a 2-D tensor of shape (num_rois, 5) given as [[batch_id, x1, y1, x2, y2], ...].|RoI pooled output 4-D tensor of shape (num_rois, channels, pooled_shape[0], pooled_shape[1]).|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |MaxUnpool|(2 - 3) Input data tensor that has to be unpooled. This tensor is typically the first output of the MaxPool op. Dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non-image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
Input data tensor containing the indices corresponding to elements in the first input tensor X. This tensor is typically the second output of the MaxPool op. Dimensions must be the same as input tensor X. The indices are linear, i.e. computed considering the tensor as a flattened 1-D tensor, assuming row-major storage. Also, the linear indices should not consider padding. So the values in indices are in the range [0, N x C x D1 x ... x Dn).
The shape of the output can be explicitly set which will cause pads values to be auto generated. If 'output_shape' is specified, 'pads' values are ignored.|Output data tensor that contains the result of the unpooling.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.
tensor(int64)Constrain index tensor to int64|9| |Mean|(1 - ∞) List of tensors for mean.|Output tensor.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|8| |MeanVarianceNormalization|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to all numeric tensors.|9| |Min|(1 - ∞) List of tensors for min.|Output tensor.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|8| |Mod|Dividend tensor
Divisor tensor|Remainder tensor|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|10| |Mul|First operand.
Second operand.|Result, has same element type as two inputs|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|7| |Multinomial|Input tensor with shape [batch_size, class_size], where class_size is the number of all possible outcomes. Each value along the axis zero represents the unnormalized log-probability of each corresponding outcome in a batch.|Output tensor with shape [batch_size, sample_size], where sample_size is the number of times to sample. Each value along the axis zero represents the outcome of the corresponding sample in a batch.|tensor(float16), tensor(float), tensor(double)Constrain input types to float tensors.
tensor(int32), tensor(int64)Constrain output types to integral tensors.|7| |Neg|Input tensor|Output tensor|tensor(float), tensor(int32), tensor(int8), tensor(int16), tensor(int64), tensor(float16), tensor(double)Constrain input and output types to signed numeric tensors.|6| |NonMaxSuppression|(2 - 5) An input tensor with shape [num_batches, spatial_dimension, 4]. The single box data format is indicated by center_point_box.
An input tensor with shape [num_batches, num_classes, spatial_dimension]
Integer representing the maximum number of boxes to be selected per batch per class. It is a scalar.
Float representing the threshold for deciding whether boxes overlap too much with respect to IOU. It is a scalar. Value range [0, 1].
Float representing the threshold for deciding when to remove boxes based on score. It is a scalar.|Selected indices from the boxes tensor. [num_selected_indices, 3], the selected index format is [batch_index, class_index, box_index].||10| |NonZero|input|output (always 2D tensor)|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain to all tensor types.|9| |Not|Input tensor|Output tensor|tensor(bool)Constrains input/output to boolean tensors.|1| |OneHot|Input tensor containing indices. The values must be non-negative integers. Any entries in the 'indices' input tensor with values outside the range [0, depth) will result in one-hot representation with all 'off_value' values in the output tensor. In case 'indices' is of non-integer type, the values will be cast to int64 before use.
Scalar specifying the number of classes in one-hot tensor. This is also the size of the one-hot dimension (specified by 'axis' attribute) added on in the output tensor, and the values in the 'indices' input tensor are expected to be in the range [0, depth). In case 'depth' is of non-integer type, it will be cast to int64 before use.
Rank 1 tensor containing exactly two elements, in the format [off_value, on_value], where 'on_value' is the value used for filling locations specified in 'indices' input tensor, and 'off_value' is the value used for filling locations other than those specified in 'indices' input tensor. |Tensor of rank one greater than input tensor 'indices', i.e. rank(output) = rank(indices) + 1. The data type for the elements of the output tensor is the same as the type of the 'values' input.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrains input to only numeric types.
tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrains input to only numeric types.
tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain to any tensor type.|9| |Or|First input operand for the logical operator.
Second input operand for the logical operator.|Result tensor.|tensor(bool)Constrains input to boolean tensor.
tensor(bool)Constrains output to boolean tensor.|7| |PRelu|Input tensor
Slope tensor. The shape of slope can be smaller than the first input X; if so, its shape must be unidirectional broadcastable to X|Output tensor (same size as X)|tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64)Constrain input and output types to float/int tensors.|9| |Pad|Input tensor.|Tensor after padding.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|2| |Pow|First operand, base of the exponent.
Second operand, power of the exponent.|Output tensor (same size as X)|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|7| |QLinearConv|(8 - 9) Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 ... x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
Scale tensor for input 'x'. It's a scalar, which means a per-tensor/layer quantization.
Zero point tensor for input 'x'. It's a scalar, which means a per-tensor/layer quantization.
The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x ... x kn), where (k1 x k2 x ... kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL ...]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
Scale tensor for input 'w'. It could be a scalar or a 1-D tensor, which means a per-tensor/layer or per output channel quantization. If it's a 1-D tensor, its number of elements should be equal to the number of output channels (M).
Zero point tensor for input 'w'. It could be a scalar or a 1-D tensor, which means a per-tensor/layer or per output channel quantization. If it's a 1-D tensor, its number of elements should be equal to the number of output channels (M).
Scale tensor for output 'y'. It's a scalar, which means a per-tensor/layer quantization.
Zero point tensor for output 'y'. It's a scalar, which means a per-tensor/layer quantization.
Optional 1D bias to be added to the convolution, has size of M.|Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.|tensor(int8), tensor(uint8)Constrain input type to 8-bit integer tensor.
tensor(int8), tensor(uint8)Constrain filter type to 8-bit integer tensor.
tensor(int8), tensor(uint8)Constrain output type to 8-bit integer tensor.
tensor(int32)Constrain bias type to 32-bit integer tensor.|10| |QLinearMatMul|N-dimensional quantized matrix a
scale of quantized input a
zero point of quantized input a
N-dimensional quantized matrix b
scale of quantized input b
zero point of quantized input b
scale of quantized output y
zero point of quantized output y|Quantized matrix multiply results from a * b|tensor(int8), tensor(uint8)Constrain input a and its zero point data type to 8-bit integer tensor.
tensor(int8), tensor(uint8)Constrain input b and its zero point data type to 8-bit integer tensor.
tensor(int8), tensor(uint8)Constrain output y and its zero point data type to 8-bit integer tensor.|10| |QuantizeLinear|(2 - 3) N-D full precision Input tensor to be quantized.
Scale for doing quantization to get 'y'. It's a scalar, which means a per-tensor/layer quantization.
Zero point for doing quantization to get 'y'. It's a scalar, which means a per-tensor/layer quantization. Default value is 0 if it's not specified.|N-D quantized output tensor. It has same shape as input 'x'.|tensor(float), tensor(int32)Constrain 'x' to float or int32 tensor.
tensor(int8), tensor(uint8)Constrain 'y_zero_point' and 'y' to 8-bit integer tensor.|10| |RNN|(3 - 6) The input sequences packed (and potentially padded) into one 3-D tensor with the shape of [seq_length, batch_size, input_size].
The weight tensor for input gate. Concatenation of Wi and WBi (if bidirectional). The tensor has shape [num_directions, hidden_size, input_size].
The recurrence weight tensor. Concatenation of Ri and RBi (if bidirectional). The tensor has shape [num_directions, hidden_size, hidden_size].
The bias tensor for input gate. Concatenation of [Wbi, Rbi] and [WBbi, RBbi] (if bidirectional). The tensor has shape [num_directions, 2*hidden_size]. Optional: If not specified - assumed to be 0.
Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size]. (0 - 2)|A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].
The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.
tensor(int32)Constrain seq_lens to integer tensor.|7| |RandomNormal||Output tensor of random values drawn from normal distribution|tensor(float16), tensor(float), tensor(double)Constrain output types to float tensors.|1| |RandomNormalLike|Input tensor to copy shape and optionally type information from.|Output tensor of random values drawn from normal distribution|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain to any tensor type. If the dtype attribute is not provided this must be a valid output type.
tensor(float16), tensor(float), tensor(double)Constrain output types to float tensors.|1| |RandomUniform||Output tensor of random values drawn from uniform distribution|tensor(float16), tensor(float), tensor(double)Constrain output types to float tensors.|1| |RandomUniformLike|Input tensor to copy shape and optionally type information from.|Output tensor of random values drawn from uniform distribution|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain to any tensor type. If the dtype attribute is not provided this must be a valid output type.
tensor(float16), tensor(float), tensor(double)Constrain output types to float tensors.|1| |Reciprocal|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |ReduceL1|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceL2|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceLogSum|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceLogSumExp|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceMax|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceMean|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceMin|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceProd|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceSum|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |ReduceSumSquare|An input tensor.|Reduced output tensor.|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|1| |Relu|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Reshape|An input tensor.
Specified shape for output.|Reshaped data.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|5| |Resize|N-D tensor
The scale array along each dimension. Each value must be greater than 0; a value less than 1 samples down, otherwise it samples up. The number of elements of 'scales' should be the same as the rank of input 'X'.|N-D tensor after resizing|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input 'X' and output 'Y' to all tensor types.|10| |ReverseSequence|Tensor of rank r >= 2.
Tensor specifying lengths of the sequences in a batch. It has shape [batch_size].|Tensor with same shape of input.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Input and output types can be of any tensor type.|10| |RoiAlign|Input data tensor from the previous operator; 4-D feature map of shape (N, C, H, W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data.
RoIs (Regions of Interest) to pool over; rois is 2-D input of shape (num_rois, 4) given as [[x1, y1, x2, y2], ...]. The RoIs' coordinates are in the coordinate system of the input image. Each coordinate set has a 1:1 correspondence with the 'batch_indices' input.
1-D tensor of shape (num_rois,) with each element denoting the index of the corresponding image in the batch.|RoI pooled output, 4-D tensor of shape (num_rois, C, output_height, output_width). The r-th batch element Y[r-1] is a pooled feature map corresponding to the r-th RoI X[r-1].|tensor(float16), tensor(float), tensor(double)Constrain types to float tensors.
tensor(int64)Constrain types to int tensors.|10| |Scan|(1 - ∞) Initial values of the loop's N state variables followed by M scan_inputs (1 - ∞)|Final values of the loop's N state variables followed by K scan_outputs|tensor(int64)Int64 tensor
tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)All Tensor types|9| |Scatter|Tensor of rank r >= 1.
Tensor of int32/int64 indices, of r >= 1 (same rank as input).
Tensor of rank r >=1 (same rank and shape as indices)|Tensor of rank r >= 1 (same rank as input).|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Input and output types can be of any tensor type.
tensor(int32), tensor(int64)Constrain indices to integer types|9| |Selu|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Shape|An input tensor.|Shape of the input tensor|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Input tensor can be of arbitrary type.
tensor(int64)Constrain output to int64 tensor.|1| |Shrink|The input data as Tensor.|The output.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrains input to only numeric types.|9| |Sigmoid|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Sign|Input tensor|The sign of the input tensor computed element-wise. It has the same shape and type of the input.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to all numeric tensors.|9| |Sin|Input tensor|The sine of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|7| |Sinh|Input tensor|The hyperbolic sine values of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|9| |Size|An input tensor.|Total number of elements of the input tensor|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Input tensor can be of arbitrary type.
tensor(int64)Constrain output to int64 tensor, which should be a scalar though.|1| |Slice|(3 - 5) Tensor of data to extract slices from.
1-D tensor of starting indices of corresponding axis in axes
1-D tensor of ending indices (exclusive) of corresponding axis in axes
1-D tensor of axes that starts and ends apply to.
1-D tensor of slice step of corresponding axis in axes. Default to 1. |Sliced data tensor.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.
tensor(int32), tensor(int64)Constrain indices to integer types|10| |Softmax|The input tensor that's coerced into a 2D matrix of size (NxD) as described above.|The output values with the same shape as input tensor (the original size without coercion).|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |Softplus|1D input tensor|1D input tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |Softsign|Input tensor|The softsign (x/(1+|x|)) values of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|1| |SpaceToDepth|Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.|Output tensor of [N, C * blocksize * blocksize, H/blocksize, W/blocksize].|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|1| |Split|The tensor to split (1 - ∞)|One or more outputs forming list of tensors after splitting|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|2| |Sqrt|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |Squeeze|Tensors with at least max(dims) dimensions.|Reshaped tensor with same data as input.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|1| |StringNormalizer|UTF-8 strings to normalize|UTF-8 Normalized strings||10| |Sub|First operand.
Second operand.|Result, has same element type as two inputs|tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)Constrain input and output types to high-precision numeric tensors.|7| |Sum|(1 - ∞) List of tensors for sum.|Output tensor.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|8| |Tan|Input tensor|The tangent of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|7| |Tanh|Input tensor|The hyperbolic tangent values of the input tensor computed element-wise|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|6| |TfIdfVectorizer|Input for n-gram extraction|Ngram results|tensor(string), tensor(int32), tensor(int64)Input is either UTF-8 string or int32/int64
tensor(float)1-D tensor of floats|9| |ThresholdedRelu|Input tensor|Output tensor|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.|10| |Tile|Input tensor of any shape.
1D int64 tensor of the same length as the input's number of dimensions; it contains the number of repeated copies along each of the input's dimensions.|Output tensor of the same dimension and type as tensor input. output_dim[i] = input_dim[i] * repeats[i]|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.
tensor(int64)Constrain repeat's type to int64 tensors.|6| |TopK|Tensor of shape [a_1, a_2, ..., a_n, r]
A 1-D tensor containing a single positive value corresponding to the number of top elements to retrieve|Tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n] containing top K values from the input tensor
Tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n] containing the corresponding input tensor indices for the top K values.|tensor(float16), tensor(float), tensor(double)Constrain input and output types to float tensors.
tensor(int64)Constrain index tensor to int64|10| |Transpose|An input tensor.|Transposed output.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|1| |Unsqueeze|Original tensor|Reshaped tensor with same data as input.|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|1| |Upsample|N-D tensor
The scale array along each dimension. Each value must be greater than or equal to 1. The number of elements of 'scales' should be the same as the rank of input 'X'.|N-D tensor after resizing|tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input 'X' and output 'Y' to all tensor types.|10| |Where|When True (nonzero), yield X, otherwise yield Y
values selected at indices where condition is True
values selected at indices where condition is False|Tensor of shape equal to the broadcasted shape of condition, X, and Y.|tensor(bool)Constrain to boolean tensors.
tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)Constrain input and output types to all tensor types.|9| |Xor|First input operand for the logical operator.
Second input operand for the logical operator.|Result tensor.|tensor(bool)Constrains input to boolean tensor.
tensor(bool)Constrains output to boolean tensor.|7|
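
The Version column gives the opset version at which each operator's listed signature was introduced, and the Type Constraint column lists the element types a conforming model may use for that operator's inputs and outputs. As a minimal sketch of how these columns map onto model construction, the example below builds a tiny graph from the Add (version 7) and Relu (version 6) rows using float32 tensors, which satisfy their float type constraints. It assumes the `onnx` Python package is installed; the graph name and tensor shapes are arbitrary choices for illustration.

```python
import onnx
from onnx import TensorProto, helper

# Declare float32 inputs and output; float tensors satisfy the type
# constraints listed for Add and Relu in the table above.
A = helper.make_tensor_value_info("A", TensorProto.FLOAT, [1, 4])
B = helper.make_tensor_value_info("B", TensorProto.FLOAT, [1, 4])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])

# Add takes two operands and produces a result with the same element type;
# Relu takes one tensor and produces one tensor, per the rows above.
add_node = helper.make_node("Add", inputs=["A", "B"], outputs=["C"])
relu_node = helper.make_node("Relu", inputs=["C"], outputs=["Y"])

graph = helper.make_graph([add_node, relu_node], "add_relu_example", [A, B], [Y])

# Opset 10 is at least as high as every version listed in this table.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 10)])

# The checker validates each node against its operator schema
# (input/output counts and type constraints summarized above).
onnx.checker.check_model(model)
```

Declaring an opset import no lower than the listed version is what makes the schema shown in this table the one the checker and runtimes apply to the node.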