Exception: Mask conflict happenes! #2666
Hi there, thanks for the feedback! The reason for this error is that NNI does not yet support fixing mask conflicts between BN layers (we may support this in the next release). I suggest you use the L1FilterPruner instead.
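For reference, a minimal sketch of the difference between the two pruner configs (the 0.7 sparsity and the torchvision model are illustrative assumptions, not the configuration used in this issue):

```python
import torchvision
from nni.compression.torch import SlimPruner, L1FilterPruner

model = torchvision.models.mobilenet_v2(pretrained=True).cuda()

# SlimPruner masks BatchNorm2d scaling factors, which is where the
# BN-level mask conflicts come from (placeholder sparsity for illustration)
slim_cfg = [{'sparsity': 0.7, 'op_types': ['BatchNorm2d']}]
# pruner = SlimPruner(model, slim_cfg)

# L1FilterPruner masks Conv2d filters directly instead
l1_cfg = [{'sparsity': 0.7, 'op_types': ['Conv2d']}]
pruner = L1FilterPruner(model, l1_cfg)
pruner.compress()
```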
Thanks for the reply. Do you have a schedule for your next release? By the way, after using L1FilterPruner, I got the same error.
Could you please show the config list of the L1FilterPruner? Thanks~
I ran L1FilterPruner on this config_list, and it works fine. Please check whether the mask file was generated by L1FilterPruner. I suggest you delete the mask file and re-generate it with L1FilterPruner. If the problem still exists, could you please show me the code (just the snippets of the pruner and the speedup part are fine)? Thanks~
Actually I integrated NNI model compression into mmdet.
Could you please also show the speedup part? By the way, does this problem still exist after you delete and re-generate the mask file?
Sorry, something else interrupted. I did not delete the mask; I will try to re-generate the mask and then try the speed-up. Speed-up code:
After re-generating the mask, I got the same error.
Hi, since the context of mmdet seems very complicated, could you please just run the following code and see if it can finish without an error?

```python
import os
import json
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from nni.compression.torch.speedup.compressor import ModelSpeedup
from nni.compression.torch import L1FilterPruner

# prune a torchvision MobileNetV2 and export the pruned weights and the masks
net = torchvision.models.mobilenet_v2(pretrained=True)
net.cuda()
cfg = [{'sparsity': 0.7, 'op_types': ['Conv2d']}]
pruner = L1FilterPruner(net, cfg)
pruner.compress()
pruner.export_model('./mobile_v2.pth', './mobile_v2_mask')

# load the pruned weights into a fresh model and apply the speedup
another = torchvision.models.mobilenet_v2()
another.cuda()
state_dict = torch.load('./mobile_v2.pth')
another.load_state_dict(state_dict)
data = torch.rand(1, 3, 224, 224).cuda()
ms = ModelSpeedup(another, data, './mobile_v2_mask')
ms.speedup_model()
```
Actually I can run your sample successfully (VGG-slim). My MobileNetV2 backbone in mmdet (snippet truncated):

```python
def _make_divisible(v, divisor, min_value=None):
    ...

class ConvBNReLU(nn.Sequential):
    ...

class InvertedResidual(nn.Module):
    ...

@BACKBONES.register_module
```
Can you run the code I provided yesterday successfully?
I integrated your code into mmdetection, ran it successfully, and got the output without errors.
Thanks for the quick reply~
@zheng-ningxin So do you know how to fix my code, or is this a bug?
I thought you had fixed this problem. So you still cannot speed up the MobileNetV2? Could you please paste the newest version of your MobileNetV2 (without the tuple)?
In mmdet, the old version is:
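(A hypothetical illustration only, not the actual mmdet code: the "tuple" point presumably refers to the backbone's forward returning a tuple of multi-scale feature maps, as mmdet backbones usually do, versus returning a single tensor.)

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an mmdet-style backbone (names and shapes invented):
# it normally returns a tuple of stage outputs for the detector neck.
class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.BatchNorm2d(16), nn.ReLU6(inplace=True))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.BatchNorm2d(32), nn.ReLU6(inplace=True))
        self.out_indices = (0, 1)

    def forward(self, x):
        outs = []
        for i, stage in enumerate((self.stage1, self.stage2)):
            x = stage(x)
            if i in self.out_indices:
                outs.append(x)
        return tuple(outs)   # "with tuple": multi-scale outputs
        # return x           # "without tuple": a single tensor, simpler to trace


if __name__ == '__main__':
    outs = TinyBackbone()(torch.rand(1, 3, 224, 224))
    print([o.shape for o in outs])
```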
Hi, I ran the speedup module on the MobileNetV2 you gave me, and it works fine. Please check whether you can run the following code successfully.
So, I guess the problem you met is caused by
Because we can speed up the MobileNet you gave me and the one in torchvision successfully, I think this is not caused by a bug in the speedup module. I can only see your code snippets; without the complete code and environment, it is difficult to tell whether it is caused by your code or by some conflict between
Thanks a lot. I will look into mmdet and try to find the bug in it.
@YoungSharp I'm closing this issue as it has had no updates from the user for 3 months; please feel free to reopen if you are still seeing it as an active issue.
Hi @YoungSharp, I am trying to integrate NNI into mmdetection. May I ask whether you have successfully applied NNI compression/speedup to any object detection models in mmdetection? Thanks!
Environment: Linux 16.04
Log message:
What issue did you meet, and what was expected?:
Using the NNI SlimPruner method to prune and speed up MobileNetV2, I got the error "Exception: Mask conflict happenes!".
To find the conflicting layer, I changed the code as follows:

```python
def find_successors(self, unique_name):
    """
    Find successor nodes of the given node

    Parameters
    ----------
    unique_name : str
        The unique name of the node
    """
    # ... original body unchanged; a debug print of the successors list
    # was added here, which produced the log below
```
And got the following log:
```
successors = ['backbone.conv1.2']
successors = ['backbone.stage1.0.conv.0.0']
successors = ['backbone.stage1.0.conv.0.2']
successors = ['backbone.stage1.0.conv.1']
successors = ['backbone.stage2.0.conv.0.0']
successors = ['backbone.stage2.0.conv.0.2']
successors = ['backbone.stage2.0.conv.1.0']
successors = ['backbone.stage2.0.conv.1.2']
successors = ['backbone.stage2.0.conv.2']
successors = ['backbone.stage2.1.conv.0.0', 'backbone.stage2.1.aten::add.138']
successors = ['backbone.stage3.0.conv.0.0']
successors = ['backbone.stage2.1.conv.0.2']
successors = ['backbone.stage2.1.conv.1.0']
successors = ['backbone.stage2.1.conv.1.2']
successors = ['backbone.stage2.1.conv.2']
successors = ['backbone.stage2.1.aten::add.138']
Traceback (most recent call last):
  File "tools/compress_onnx_export.py", line 164, in <module>
    main()
  File "tools/compress_onnx_export.py", line 137, in main
    m_speedup.speedup_model()
  File "speedup/compressor.py", line 187, in speedup_model
    self.infer_modules_masks()
  File "/speedup/compressor.py", line 146, in infer_modules_masks
    self.infer_module_mask(module_name, None, mask=mask)
  File "/speedup/compressor.py", line 138, in infer_module_mask
    self.infer_module_mask(_module_name, module_name, in_shape=output_cmask)
  File "/speedup/compressor.py", line 122, in infer_module_mask
    output_cmask = infer_from_inshape[m_type](module_masks, in_shape)
  File "//speedup/infer_shape.py", line 240, in <lambda>
    'aten::add': lambda module_mask, mask: add_inshape(module_mask, mask),
  File "//speedup/infer_shape.py", line 364, in add_inshape
    raise Exception('Mask conflict happenes!')
Exception: Mask conflict happenes!
```
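For context, the check that fails is the mask inference for the residual `aten::add`: both operands of the add must keep exactly the same channels, but SlimPruner prunes each BN layer independently, so the masks coming from the identity branch and the conv branch can disagree. A simplified sketch of that idea (not the actual NNI source):

```python
import torch

def add_inshape_check(mask_a, mask_b):
    # Simplified illustration: an element-wise add can only be sped up
    # if both inputs keep exactly the same channels.
    if not torch.equal(mask_a, mask_b):
        raise Exception('Mask conflict happenes!')
    return mask_a

# channel masks produced independently for the two branches of a residual block
identity_mask = torch.tensor([1., 1., 0., 1.])
conv_branch_mask = torch.tensor([1., 0., 1., 1.])
add_inshape_check(identity_mask, conv_branch_mask)  # raises the exception above
```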
How to reproduce it?:
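A minimal sketch of the kind of script that hits this, using a torchvision MobileNetV2 as a stand-in for the mmdet backbone (model, paths, and sparsity are placeholders, not the original setup):

```python
import torch
import torchvision
from nni.compression.torch import SlimPruner
from nni.compression.torch.speedup.compressor import ModelSpeedup

# prune BatchNorm2d scaling factors with SlimPruner and export weights + masks
net = torchvision.models.mobilenet_v2(pretrained=True).cuda()
cfg = [{'sparsity': 0.7, 'op_types': ['BatchNorm2d']}]
pruner = SlimPruner(net, cfg)
pruner.compress()
pruner.export_model('./mobile_v2.pth', './mobile_v2_mask')

# load the pruned weights into a fresh model and run the speedup;
# the inverted-residual add nodes are where the conflict is reported
another = torchvision.models.mobilenet_v2().cuda()
another.load_state_dict(torch.load('./mobile_v2.pth'))
data = torch.rand(1, 3, 224, 224).cuda()
ms = ModelSpeedup(another, data, './mobile_v2_mask')
ms.speedup_model()
```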
Additional information: