ONNX inference results are inconsistent with MNN inference results; the correctness check reports TESTERROR #2264
Comments
commit dcb080c
Could you share the onnx model?
testmodel.zip
The model has been uploaded.
OK, reproduced it; locating the cause.
It is a bug in the CPURaster singleConvert optimization; a fix is in progress.
Fixed in 2.4.2.
Thanks a lot for your work.
Platform (include target platform as well if cross-compiling): PC x86, ubuntu-20.04
GitHub Version: 2.4.0
If you downloaded the source as a ZIP package, provide the download date and the git revision stored in the archive comment (obtainable by running 7z l PATH/TO/ZIP and searching the output for Comment, which yields a line of the form Comment = bc80b11110cd440aacdabbf59658d630527a7f2b). If you used git clone, provide the commit id from the first line of git log.
Compiling method: cmake
mkdir build; cd build; cmake ..; make -j 4
Build Log:
python ../tools/script/testMNNFromOnnx.py ../hhquan_models/vadmodel2.onnx
onnx/test.onnx
tensor(float)
tensor(float)
tensor(float)
tensor(float)
tensor(float)
tensor(float)
tensor(float)
tensor(float)
['yn', 'cnn1_c', 'cnn2_c', 'cnn3_c', 'ht']
inputs:
xn
onnx/
cnn1_a
onnx/
cnn2_a
onnx/
cnn3_a
onnx/
cnn1_b
onnx/
cnn2_b
onnx/
cnn3_b
onnx/
h
onnx/
outputs:
onnx/yn.txt (1, 1, 1)
onnx/
onnx/cnn1_c.txt (1, 1, 40, 16)
onnx/
onnx/cnn2_c.txt (1, 1, 20, 16)
onnx/
onnx/cnn3_c.txt (1, 1, 20, 32)
onnx/
onnx/ht.txt (2, 1, 32)
onnx/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[14:00:09] /home/hhquan/Projects/MNN_test/MNN-2.4.0/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 7
[14:00:09] /home/hhquan/Projects/MNN_test/MNN-2.4.0/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /rnn/GRU_output_0 has empty input, the index is 4
[14:00:09] /home/hhquan/Projects/MNN_test/MNN-2.4.0/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /rnn/GRU_1_output_0 has empty input, the index is 4
Start to Optimize the MNN Net...
[14:00:09] /home/hhquan/Projects/MNN_test/MNN-2.4.0/tools/converter/source/optimizer/onnxextra/OnnxSequenceGRUMerge.cpp:68: OnnxSequenceGRUMerge: W shape:{1, 96, 320}; R shape: {1, 96, 32}, inputs num:6, outputs num:2
[14:00:09] /home/hhquan/Projects/MNN_test/MNN-2.4.0/tools/converter/source/optimizer/onnxextra/OnnxSequenceGRUMerge.cpp:68: OnnxSequenceGRUMerge: W shape:{1, 96, 32}; R shape: {1, 96, 32}, inputs num:6, outputs num:2
268 op name is empty or dup, set to Unsqueeze268
inputTensors : [ xn, cnn1_a, cnn1_b, cnn2_a, cnn2_b, cnn3_a, cnn3_b, h, ]
outputTensors: [ cnn1_c, cnn2_c, cnn3_c, ht, yn, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
xn
cnn1_a
cnn2_a
cnn3_a
cnn1_b
cnn2_b
cnn3_b
h
output: yn
output: cnn1_c
output: cnn2_c
output: cnn3_c
output: ht
yn: (1, 1, 1, )
TESTERROR yn value error : absMaxV:0.855153 - DiffMax 0.144592
Error for output yn
cnn1_c: (1, 1, 40, 16, )
cnn2_c: (1, 1, 20, 16, )
TESTERROR cnn2_c value error : absMaxV:3.355157 - DiffMax 3.234371
Error for output cnn2_c
cnn3_c: (1, 1, 20, 32, )
TESTERROR cnn3_c value error : absMaxV:2.566657 - DiffMax 1.898886
Error for output cnn3_c
ht: (2, 1, 32, )
TESTERROR ht value error : absMaxV:1.087269 - DiffMax 2.051899
Error for output ht
Save mnn result to .error director