# Tracking why operators are not covered

The ONNX backend test script reports coverage of operators and attributes, but there are various reasons why some operators still lack test coverage. This doc tracks why each operator is or is not covered by the test cases.
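For reference, the coverage report comes from wiring a backend into ONNX's backend test suite. A minimal sketch, assuming a Caffe2 build that ships `caffe2.python.onnx.backend` (older releases exposed the same API as `onnx_caffe2.backend`):

```python
# Minimal sketch: run the ONNX backend test suite against Caffe2 and
# enable the operator/attribute coverage report this doc summarizes.
import onnx.backend.test

import caffe2.python.onnx.backend as c2_backend  # onnx_caffe2.backend on older releases

backend_test = onnx.backend.test.BackendTest(c2_backend, __name__)

# enable_report() turns on coverage reporting, which lists the operators
# and attributes exercised by the backend's test runs.
globals().update(backend_test.enable_report().test_cases)
```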

- 💚 The ONNX operator can map to a Caffe2 operator.
- 💛 The solution is not perfect or finished; for example, the operator maps to a combination of Caffe2 operators.
- 💔 Hard to find a solution with existing Caffe2 operators.
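To check a single operator by hand, one can build a one-node ONNX model and ask the Caffe2 backend to prepare it; `prepare` fails for operators in the 💔 category. A minimal sketch (the `Abs` probe here is just an illustration):

```python
import numpy as np
from onnx import helper, TensorProto

import caffe2.python.onnx.backend as backend

# One-node model exercising the operator under test (Abs here).
node = helper.make_node("Abs", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node], "op_probe",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [3])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [3])],
)
model = helper.make_model(graph)

# prepare() translates the model to Caffe2 and raises if an op has no mapping.
rep = backend.prepare(model)
outputs = rep.run([np.array([-1.0, 0.0, 2.0], dtype=np.float32)])
print(outputs[0])  # [1. 0. 2.]
```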
| Operator | Test Coverage | PyTorch | Caffe2 |
|---|---|---|---|
| Abs | Yes | OK | 💚OK |
| Add | Yes | OK | 💚OK |
| And | | Supports int tensor, but not bool tensor | 💚OK |
| ArgMax | | | 💔No op |
| ArgMin | | | 💔No op |
| AveragePool | Yes | OK | 💚OK |
| BatchNormalization | Yes | OK | 💚OK |
| Cast | | | 💔No op |
| Ceil | | | 💔No op |
| Clip | Yes | OK | 💚OK |
| Concat | Yes | OK | 💚OK |
| Constant | Yes | OK | 💛Special handling |
| Conv | Yes | OK | 💚OK |
| ConvTranspose | | | 💚OK |
| DepthToSpace | | | 💛Should be BatchToSpace, no tests |
| Div | Yes | OK | 💚OK |
| Dropout | Yes | OK | 💚OK |
| Elu | Yes | OK | 💚OK |
| Equal | Yes | OK | 💚OK |
| Exp | Yes | OK | 💚OK |
| Flatten | Yes | OK | 💚OK |
| Floor | | | 💔No op |
| GRU | | | 💛Under development |
| Gather | Yes | OK | 💛C2 only supports axis=0 or 1 |
| Gemm | Yes | OK | 💛C2 uses FC or MatMul + Add (see the sketch after the table) |
| GlobalAveragePool | Yes | No direct mapping | 💚OK |
| GlobalLpPool | | | 💔No op |
| GlobalMaxPool | | | 💚OK |
| Greater | | | 💔Only supports int tensor |
| HardSigmoid | | | 💔No op |
| Hardmax | | | 💔No op |
| InstanceNormalization | | | 💚OK |
| LRN | Yes | OK | 💚OK |
| LSTM | | | 💛Under development |
| LeakyRelu | Yes | OK | 💚OK |
| Less | | | 💔Only supports int tensor |
| Log | Yes | OK | 💚OK |
| LogSoftmax | | OK | 💛No op, translated in onnx-caffe2 |
| LpNormalization | | | 💚Should be LpNorm, no tests |
| LpPool | | | 💚Should be LpPool, no tests |
| MatMul | Yes | OK | 💚OK |
| Max | Yes | OK | 💚OK |
| MaxPool | Yes | OK | 💚OK |
| MaxRoiPool | | | 💔No op |
| Mean | | | 💔No op |
| Min | Yes | OK | 💚OK |
| Mul | Yes | OK | 💚OK |
| Neg | Yes | OK | 💚OK |
| Not | | | 💚OK |
| Or | | | 💚OK |
| PRelu | Yes | OK | 💚OK |
| Pad | Yes | OK | 💚OK |
| Pow | | OK | 💛Under development; C2 only accepts the exponent as an argument, not as an input |
| RNN | | | 💛Under development |
| RandomNormal | | | 💔No op |
| RandomNormalLike | | | 💔No op |
| RandomUniform | | | 💔No op |
| RandomUniformLike | | | 💔No op |
| Reciprocal | | | 💛Use Pow to implement |
| ReduceL1 | | | 💔No op |
| ReduceL2 | | | 💔No op |
| ReduceLogSum | | | 💔No op |
| ReduceLogSumExp | | | 💔No op |
| ReduceMax | | | 💔No op |
| ReduceMean | | | 💔No op |
| ReduceMin | | | 💔No op |
| ReduceProd | | | 💔No op |
| ReduceSum | | | 💔No op |
| ReduceSumSquare | | | 💔No op |
| Relu | Yes | OK | 💚OK |
| Reshape | Yes | OK | 💚OK |
| Selu | Yes | OK | 💚OK |
| Sigmoid | Yes | OK | 💚OK |
| Slice | Yes | OK | 💔ScatterAssign + Cast, very hacky implementation; Slice in C2 only supports one dimension |
| Softmax | Yes | OK | 💔Axis and dim have different semantics |
| Softplus | Yes | OK | 💚OK |
| Softsign | | | 💚OK, no tests |
| SpaceToDepth | | | 💛Should be SpaceToBatch, no tests |
| Split | Yes | OK | 💚OK |
| Sqrt | | | 💛Use Pow to implement |
| Squeeze | | | 💚OK, no tests |
| Sub | | OK | 💚OK |
| Sum | Yes | OK | 💚OK |
| Tanh | Yes | OK | 💚OK |
| Tile | | | 💚OK, no tests |
| Transpose | Yes | OK | 💚OK |
| Xor | | | 💚OK |
| experimental ATen | | | 💚OK |
| experimental Affine | | | 💔No op |
| experimental ConstantFill | | | 💚OK |
| experimental Crop | | | 💔No op |
| experimental FC | | | 💚OK |
| experimental GRUUnit | | | 💚OK, no tests |
| experimental GivenTensorFill | | | 💚OK |
| experimental Identity | | | 💚OK |
| experimental ImageScaler | | | 💔No op |
| experimental MeanVarianceNormalization | | | 💔No op |
| experimental ParametricSoftplus | | | 💔No op |
| experimental Scale | | | 💚OK |
| experimental ScaledTanh | | | 💔No op |
| experimental ThresholdedRelu | | | 💔No op |
| experimental Upsample | | | 💔No bilinear |
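As an illustration of a 💛 entry, take the Gemm row above: ONNX `Gemm(A, B, C)` has no single Caffe2 counterpart in the general case, but with `alpha = beta = 1` it can be emulated with `MatMul` followed by a broadcast `Add`. A minimal sketch, assuming a working `caffe2.python` install:

```python
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("A", np.random.randn(2, 3).astype(np.float32))
workspace.FeedBlob("B", np.random.randn(3, 4).astype(np.float32))
workspace.FeedBlob("C", np.random.randn(4).astype(np.float32))

net = core.Net("gemm_via_matmul_add")
net.MatMul(["A", "B"], "AB")            # A @ B
net.Add(["AB", "C"], "Y", broadcast=1)  # + C, broadcast along the last axis
workspace.RunNetOnce(net)
print(workspace.FetchBlob("Y").shape)   # (2, 4)
```

Caffe2's `FC` computes `X * W^T + b`, so the single-op FC path only lines up when Gemm's `B` is stored transposed (`transB=1`) and `C` is a bias vector; otherwise the MatMul + Add pair above is the fallback.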