Could not open file mnist.onnx #228
Hi @chegnyanjun, can you provide more information? Does the … Any errors after that are likely caused by the … |
Hi @rmccorm4, thanks for your reply.
As some issues discuss, maybe there is some unsupported layer that TensorRT does not support. But actually, you know the UNet++ model (I trained it with PyTorch) is a very simple model, and I set the … I use …
So I was a little puzzled. Doesn't TensorRT 6.0.1.5 say it supports Upsample in nearest mode? Besides, I think the parser's diagnostic info in the TensorRT internal API is very sparse; I hope it becomes richer to help people find problems more quickly. Also, I hope TensorRT can release more PyTorch model demos, such as semantic segmentation, GANs, and so on. There are so few that I spent a lot of time getting these working. Finally, I think some bugs in the official TensorRT C++ API docs should be fixed, and both the C++ and Python API docs should be more detailed. Hope for your reply. I greatly appreciate your patience in reading this issue. Thanks. |
Hi @chegnyanjun, If you share both of your:
I can take a look at this when I get a chance tomorrow |
@rmccorm4, thanks very much. Here are my PyTorch model and ONNX model.

import torch.nn as nn
import torch
import numpy as np
from skimage.io import imsave

class ConvBlock(nn.Module):
    """
    Convolution Block
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1, dilation=1):
        super(ConvBlock, self).__init__()
        block = [nn.Conv2d(in_channels=in_ch, out_channels=out_ch, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, bias=True)]
        block += [nn.BatchNorm2d(out_ch)]  # BatchNorm2d takes num_features, not out_channels
        block += [nn.ReLU(inplace=True)]
        self.block = nn.Sequential(*block)

    def forward(self, x):
        return self.block(x)

class UpConvBlock(nn.Module):
    """
    Upsampling Convolution Block
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1, dilation=1):
        super(UpConvBlock, self).__init__()
        block = [nn.Upsample(scale_factor=2)]
        block += [nn.Conv2d(in_channels=in_ch, out_channels=out_ch, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, bias=True)]
        block += [nn.BatchNorm2d(out_ch)]  # BatchNorm2d takes num_features, not out_channels
        block += [nn.ReLU(inplace=True)]
        self.block = nn.Sequential(*block)

    def forward(self, x):
        return self.block(x)

class ResBlock(nn.Module):
    """
    Residual Block
    """
    def __init__(self, in_ch, out_ch, stride=1):
        super(ResBlock, self).__init__()
        self.relu = nn.ReLU(inplace=True)
        self.conv1 = nn.Conv2d(in_channels=in_ch, out_channels=out_ch, kernel_size=3, stride=stride, padding=1, dilation=1, bias=True)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(in_channels=out_ch, out_channels=out_ch, kernel_size=3, stride=stride, padding=1, dilation=1, bias=True)
        self.bn2 = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        identity = self.conv1(x)
        out = self.bn1(identity)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out += identity
        out = self.relu(out)
        return out

class NestedUNet(nn.Module):
    """
    Implementation of nested Unet (Unet++)
    """
    def __init__(self, in_ch=3, out_ch=1, n=32):
        super(NestedUNet, self).__init__()
        filters = [n, n * 2, n * 4, n * 8, n * 16]  # 32, 64, 128, 256, 512
        self.conv0_0 = ResBlock(in_ch, filters[0])
        self.conv1_0 = ResBlock(filters[0], filters[1])
        self.conv2_0 = ResBlock(filters[1], filters[2])
        self.conv3_0 = ResBlock(filters[2], filters[3])
        self.conv4_0 = ResBlock(filters[3], filters[4])
        self.conv0_1 = ResBlock(filters[0] + filters[1], filters[0])
        self.conv1_1 = ResBlock(filters[1] + filters[2], filters[1])
        self.conv2_1 = ResBlock(filters[2] + filters[3], filters[2])
        self.conv3_1 = ResBlock(filters[3] + filters[4], filters[3])
        self.conv0_2 = ResBlock(filters[0] * 2 + filters[1], filters[0])
        self.conv1_2 = ResBlock(filters[1] * 2 + filters[2], filters[1])
        self.conv2_2 = ResBlock(filters[2] * 2 + filters[3], filters[2])
        self.conv0_3 = ResBlock(filters[0] * 3 + filters[1], filters[0])
        self.conv1_3 = ResBlock(filters[1] * 3 + filters[2], filters[1])
        self.conv0_4 = ResBlock(filters[0] * 4 + filters[1], filters[0])
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)  # original code
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.final1 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final2 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final3 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final4 = nn.Conv2d(filters[0], out_ch, kernel_size=1)
        self.final5 = nn.Conv2d(4, out_ch, kernel_size=1)

    def forward(self, x):
        x0_0 = self.conv0_0(x)
        x1_0 = self.conv1_0(self.pool(x0_0))
        x0_1 = self.conv0_1(torch.cat((x0_0, self.up(x1_0)), 1))
        x2_0 = self.conv2_0(self.pool(x1_0))
        x1_1 = self.conv1_1(torch.cat((x1_0, self.up(x2_0)), 1))
        x0_2 = self.conv0_2(torch.cat((x0_0, x0_1, self.up(x1_1)), 1))
        x3_0 = self.conv3_0(self.pool(x2_0))
        x2_1 = self.conv2_1(torch.cat((x2_0, self.up(x3_0)), 1))
        x1_2 = self.conv1_2(torch.cat((x1_0, x1_1, self.up(x2_1)), 1))
        x0_3 = self.conv0_3(torch.cat((x0_0, x0_1, x0_2, self.up(x1_2)), 1))
        x4_0 = self.conv4_0(self.pool(x3_0))
        x3_1 = self.conv3_1(torch.cat((x3_0, self.up(x4_0)), 1))
        x2_2 = self.conv2_2(torch.cat((x2_0, x2_1, self.up(x3_1)), 1))
        x1_3 = self.conv1_3(torch.cat((x1_0, x1_1, x1_2, self.up(x2_2)), 1))
        x0_4 = self.conv0_4(torch.cat((x0_0, x0_1, x0_2, x0_3, self.up(x1_3)), 1))
        output1 = self.final1(x0_1)  # 4*1*256*256
        output2 = self.final2(x0_2)  # 4*1*256*256
        output3 = self.final3(x0_3)  # 4*1*256*256
        output4 = self.final4(x0_4)  # 4*1*256*256
        output5 = self.final5(torch.cat((output1, output2, output3, output4), 1))  # 4*1*256*256
        # return [output1, output2, output3, output4, output5]  # returning a list is not good for torch.jit.trace; use a tuple instead
        return (output1, output2, output3, output4, output5)  # return a tuple

if __name__ == '__main__':
    a = torch.ones((16, 6, 256, 256))
    Unet = NestedUNet(in_ch=6, out_ch=1)
    output = Unet(a)

My ONNX model: I put it at the link: |
Hi @chegnyanjun, Sorry for the delay, I've been very busy this week. I'll try to take a look next week. |
@rmccorm4, thanks for your enthusiasm. I understand, and it doesn't matter. Waiting for your reply, thanks. |
Hi @chegnyanjun,
|
Hi @rmccorm4, no problem. The log is:

save onnx model finish!

My export function is:

def saveONNX(model, filepath):  # pytorch --> onnx
    input = torch.randn(1, 6, 256, 256, device='cuda')
    input_names = ['input_merge']
    output_names = ['outputs']
    torch.onnx.export(model, input, filepath, verbose=True, input_names=input_names, output_names=output_names)

The model is UNet++, and filepath is the save path. Where do you think the problem happens? Is there a problem with my code? Hope for your reply. |
I played around with your model a bit, but I'm not sure of the exact fix. Issue seems to be coming from the Upsample op which is creating undesired Gather ops or something in the ONNX graph.
This post might help fix that if you know the expected output shape of the upsample: onnx/onnx-tensorrt#192 (comment) |
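If the output spatial size is known ahead of time, one workaround (a hedged sketch, not a confirmed fix from this thread) is to upsample to a hard-coded size rather than a scale factor, so the exporter can bake the shape in as a constant instead of computing it at runtime, which is where the extra Gather ops tend to come from:

```python
# Sketch: nearest-neighbour upsampling with a fixed, hard-coded output size
# (assumes the network's spatial dimensions are known at export time).
import torch
import torch.nn as nn

class FixedUpsample(nn.Module):
    """Upsample to a known size instead of a scale factor."""
    def __init__(self, size):
        super(FixedUpsample, self).__init__()
        self.size = size  # e.g. (256, 256), known from the network design

    def forward(self, x):
        return nn.functional.interpolate(x, size=self.size, mode='nearest')

m = FixedUpsample((8, 8))
y = m(torch.ones(1, 3, 4, 4))
print(tuple(y.shape))  # (1, 3, 8, 8)
```

Swapping this in for `self.up` in the NestedUNet above would need one fixed size per pyramid level, since each `self.up` call operates at a different resolution.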
Yep, I also found that the problem may come from the definition of Upsample (it introduces the problematic Gather ops in the ONNX graph). But I checked the supported ops in TensorRT 6.0.1.5 for ONNX 1.6, and it says TensorRT 6.0.1.5 supports the Gather op in ONNX 1.6. So maybe TensorRT 6.0.1.5 does not support Upsample very well, right? Besides, thanks for your suggestion; I will try it. Thank you very much for your kind help. |
See this thread for looking into Upsample op issues: #284 |
Hi @chegnyanjun, can you try to export your model with PyTorch 1.4 and then parse that model with TensorRT 7? According to @daquexian, an issue on the PyTorch side of things should've been fixed in v1.4 |
Hi @rmccorm4 |
Hi, I am trying to serialize mnist.onnx into an engine file through C++ code, but it reported the following problem, so I was very confused and want to know the reason. The whole output is:
Could not open file mnist.onnx.
Could not open file mnist.onnx.
[TRT] Network must have at least one output.
My environment is: Ubuntu 14.04 + CUDA 10.1 + GTX 1060 + Qt 5.5 + ONNX 1.6 + PyTorch 1.2 + TensorRT 6.0.1.5.
I hope for your reply. Thanks.
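A side note on the "Could not open file mnist.onnx" part: that message usually just means the path is relative to a working directory other than the one the binary runs from (a Qt build, for example, often runs out of a separate build directory), and the "Network must have at least one output" error then follows because nothing was parsed. A small sketch of checking the path up front, written in Python for brevity (the search directories are hypothetical):

```python
# Sketch: resolve a model file against several candidate directories before
# handing it to the parser, so a wrong working directory fails loudly.
import os

def resolve_model_path(filename, search_dirs):
    """Return the first absolute path where the file exists, else None."""
    for d in search_dirs:
        candidate = os.path.abspath(os.path.join(d, filename))
        if os.path.isfile(candidate):
            return candidate
    return None

# Hypothetical example: look in the current working directory and a data dir.
path = resolve_model_path('mnist.onnx', [os.getcwd(), '/path/to/data'])
print(path if path is not None else 'mnist.onnx not found - check the working directory')
```

The same check in the C++ code (an `std::ifstream` open test before calling the parser) would distinguish a bad path from a genuine parse failure.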