
Quantization tool reports "input tensor error" #353

Closed
haima1998 opened this issue Sep 24, 2019 · 2 comments

Comments

@haima1998

When I try to quantize with MNN, I get the following error. What is the likely cause of this problem?
MNN/tools/quantization/calibration.cpp:189: Check failed: inputTensorStatistic != _featureInfo.end() ==> input tensor error!

My config file preprocessConfig.json is written as follows:
{
    "format": "RGB",
    "mean": [127.5, 127.5, 127.5],
    "normal": [0.00784314, 0.00784314, 0.00784314],
    "width": 544,
    "height": 544,
    "path": "/path/images",
    "used_image_num": 500,
    "feature_quantize_method": "KL",
    "weight_quantize_method": "MAX_ABS"
}
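For context on the config values: mean and normal describe the usual per-channel affine preprocessing, out = (pixel - mean) * normal, and since 0.00784314 ≈ 1/127.5, uint8 pixels map to roughly [-1, 1]. A minimal sketch of that transform, assuming a made-up random image (this is illustrative NumPy, not MNN's own preprocessing code):

```python
import numpy as np

# Hypothetical 544x544 RGB image with uint8 pixel values.
image = np.random.randint(0, 256, size=(544, 544, 3), dtype=np.uint8)

mean = np.array([127.5, 127.5, 127.5], dtype=np.float32)
normal = np.array([0.00784314, 0.00784314, 0.00784314], dtype=np.float32)

# The same affine transform the config describes: (x - mean) * normal.
preprocessed = (image.astype(np.float32) - mean) * normal

# 0.00784314 ~= 1/127.5, so values land in roughly [-1, 1].
print(preprocessed.min(), preprocessed.max())
```

If the model was trained with a different normalization (e.g. [0, 1] inputs), these values would need to match it, since calibration runs real images through the network.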

If I change feature_quantize_method and weight_quantize_method to ADMM, the "input tensor error!" problem goes away, but execution crashes at the very end. The crash is at:

#0 __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:249
249 ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory.
[Current thread is 1 (Thread 0x7f981f60b740 (LWP 22608))]
(gdb) bt
#0 __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:249
#1 0x00007f981f18519a in MNN::CPUReshape::onExecute(std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&) () from /home/charlie/disk1/code/opensouce/MNN/build/libMNN.so
#2 0x00007f981f109c4e in MNN::Pipeline::executeCallBack(std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&, std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&) () from /home/charlie/disk1/code/opensouce/MNN/build/libMNN.so
#3 0x00007f981f116957 in MNN::Session::runWithCallBack(std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&, std::function<bool (std::vector<MNN::Tensor*, std::allocator<MNN::Tensor*> > const&, MNN::OperatorInfo const*)> const&, bool) const () from /home/charlie/disk1/code/opensouce/MNN/build/libMNN.so
#4 0x000055f9fef1004d in Calibration::_computeFeatureScaleADMM() ()
#5 0x000055f9fef163e8 in Calibration::runQuantizeModel() ()
#6 0x000055f9feeeadaa in main ()
(gdb)

@czy2014hust
Collaborator

  1. if (_featureQuantizeMethod == "KL") {
     You can comment out this if block for now.
  2. If this error appears, ADMM is not supported for this case yet.

@haima1998
Author

  1. if (_featureQuantizeMethod == "KL") {

     You can comment out this if block for now.
  2. If this error appears, ADMM is not supported for this case yet.

Thanks a lot! After commenting out the if block, the problem is resolved and MNN model quantization completes.
However, inference speed after quantization shows no obvious improvement; only the model size is clearly reduced.
