I tried quantizing a model with MNN and got the error below. What is the likely cause of this problem?
MNN/tools/quantization/calibration.cpp:189: Check failed: inputTensorStatistic != _featureInfo.end() ==> input tensor error!
My configuration file, preprocessConfig.json, is written like this:
{
  "format":"RGB",
  "mean":[
    127.5,
    127.5,
    127.5
  ],
  "normal":[
    0.00784314,
    0.00784314,
    0.00784314
  ],
  "width":544,
  "height":544,
  "path":"/path/images",
  "used_image_num":500,
  "feature_quantize_method":"KL",
  "weight_quantize_method":"MAX_ABS"
}
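For reference, a minimal standalone sketch of how these mean/normal values are commonly understood to be applied during calibration preprocessing, i.e. dst = (src - mean) * normal per channel; this is an assumption about the convention, not the actual MNN source. With mean = 127.5 and normal ≈ 1/127.5, a uint8 pixel in [0, 255] maps to roughly [-1, 1]:

```cpp
// Standalone sketch (not MNN code): check what the configured mean/normal do
// to a uint8 pixel value, assuming the usual dst = (src - mean) * normal rule.
#include <cstdio>

int main() {
    const float mean   = 127.5f;
    const float normal = 0.00784314f;  // ~= 1 / 127.5

    const int samples[] = {0, 64, 127, 191, 255};
    for (int src : samples) {
        float dst = (static_cast<float>(src) - mean) * normal;
        std::printf("src = %3d -> dst = %+.4f\n", src, dst);
    }
    return 0;  // outputs span roughly [-1, +1]
}
```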
If I change feature_quantize_method and weight_quantize_method to ADMM, the "input tensor error!" problem goes away, but the run crashes near the end of execution. The crash is here:
#0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:249
249     ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory.
[Current thread is 1 (Thread 0x7f981f60b740 (LWP 22608))]
(gdb) bt
#0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:249
#1  0x00007f981f18519a in MNN::CPUReshape::onExecute(std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&) () from /home/charlie/disk1/code/opensouce/MNN/build/libMNN.so
#2  0x00007f981f109c4e in MNN::Pipeline::executeCallBack(std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&, std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&) () from /home/charlie/disk1/code/opensouce/MNN/build/libMNN.so
#3  0x00007f981f116957 in MNN::Session::runWithCallBack(std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&, std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&, bool) const () from /home/charlie/disk1/code/opensouce/MNN/build/libMNN.so
#4  0x000055f9fef1004d in Calibration::_computeFeatureScaleADMM() ()
#5  0x000055f9fef163e8 in Calibration::runQuantizeModel() ()
#6  0x000055f9feeeadaa in main ()
(gdb)
MNN/tools/quantization/calibration.cpp, line 181 in cfad1c3:
if (_featureQuantizeMethod == "KL") {
You can comment out this if block for now. If you run into that error, ADMM is currently not supported.
Thanks a lot! After commenting out the if block the problem is resolved and MNN model quantization completes. However, inference with the quantized model is not noticeably faster; only the model size is significantly reduced.
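If it helps to confirm that measurement, below is a rough latency-comparison sketch using MNN's public Interpreter API (createFromFile / createSession / runSession). The model file names, thread count, and warm-up/iteration counts are illustrative assumptions, and whether int8 is actually faster depends on the CPU backend having optimized int8 kernels for the operators in this particular model.

```cpp
// Hedged sketch: compare average CPU latency of the float and quantized models.
// Paths and loop counts are placeholders, not values from this issue.
#include <MNN/Interpreter.hpp>
#include <chrono>
#include <cstdio>
#include <memory>

static double averageLatencyMs(const char* modelPath, int warmup = 5, int loops = 50) {
    std::shared_ptr<MNN::Interpreter> net(MNN::Interpreter::createFromFile(modelPath));
    MNN::ScheduleConfig config;
    config.type      = MNN_FORWARD_CPU;
    config.numThread = 4;  // assumption: 4 CPU threads
    auto session = net->createSession(config);

    for (int i = 0; i < warmup; ++i) {
        net->runSession(session);  // warm-up iterations, not timed
    }
    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < loops; ++i) {
        net->runSession(session);
    }
    auto end = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count() / loops;
}

int main() {
    // Hypothetical file names; substitute the real float and quantized models.
    std::printf("float model     : %.2f ms\n", averageLatencyMs("model_float.mnn"));
    std::printf("quantized model : %.2f ms\n", averageLatencyMs("model_quant.mnn"));
    return 0;
}
```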