Cannot Run on Myriad Device #1

Open
umer-rasheed opened this issue Dec 17, 2018 · 9 comments

@umer-rasheed

Hi

I changed the code (switched the plugin device and pointed the model directory at the FP16 IR) to run the segmentation on an NCS device, and I am getting this error:
Traceback (most recent call last):
File "openvino_test_CPU-2.py", line 33, in
exec_net = plugin.load(network=net)
File "ie_api.pyx", line 305, in inference_engine.ie_api.IEPlugin.load
File "ie_api.pyx", line 318, in inference_engine.ie_api.IEPlugin.load
RuntimeError: Cannot convert layer "argmax" due to unsupported layer type "ArgMax"
/teamcity/work/scoring_engine_build/releases_openvino-2018-r4/ie_bridges/python/inference_engine/ie_api_impl.cpp:260

The default segmentation on CPU works just fine.
Kindly advise.

@PINTO0309
Owner

@umer-rasheed

NCS and NCS 2 support only a very limited set of layers.
First, make the following changes:

plugin = IEPlugin(device="MYRIAD")
#plugin = IEPlugin(device="GPU")
#plugin = IEPlugin(device="CPU")

#plugin.add_cpu_extension("lib/libcpu_extension.so")
exec_net = plugin.load(network=net)
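
For reference, here is a minimal end-to-end sketch of that load flow. The IR paths are placeholders for your own converted FP16 model, and depending on the OpenVINO release you may need IENetwork.from_ir() instead of the constructor:

from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder paths - point these at your converted FP16 IR files
model_xml = "lrmodels/FP16/semantic-segmentation-adas-0001.xml"
model_bin = "lrmodels/FP16/semantic-segmentation-adas-0001.bin"

net = IENetwork(model=model_xml, weights=model_bin)
plugin = IEPlugin(device="MYRIAD")
# No add_cpu_extension() call - the CPU extension library is only used with device="CPU"
exec_net = plugin.load(network=net)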

Then, please refer to the implementation in the following repository and offload the "ArgMax" layer.
However, since the caffemodel has not been released by Intel, I think it is a very difficult task.
https://github.com/PINTO0309/OpenVINO-DeeplabV3/blob/master/openvino_deeplabv3_test.py
https://github.com/PINTO0309/OpenVINO-DeeplabV3

You will probably need to train on your own dataset based on the following .prototxt.
I do not have much time, so I do not plan to do my own training for now.
https://github.com/PINTO0309/OpenVINO-ADAS/blob/master/lrmodels/semantic-segmentation-adas-0001.prototxt

The same issue is reported here:
https://software.intel.com/en-us/forums/computer-vision/topic/801308

@PINTO0309 PINTO0309 pinned this issue Dec 17, 2018
@PINTO0309 PINTO0309 unpinned this issue Dec 17, 2018
@umer-rasheed
Author

Hi Pinto,

Thanks a lot for the quick response.
I just needed to check whether the layers used in FCN, including transposed convolution and concatenation, actually work on the NCS with OpenVINO, since I had no luck with NCSDK in the past.
The issue you shared at the bottom was actually posted by me. It doesn't seem to have anything to do with supported layers, though.

@PINTO0309
Owner

@umer-rasheed

The issue you shared at the bottom was actually posted by me. It doesn't seem to have anything to do with supported layers, though.

It's exactly as you say.
I had misunderstood.

concatenation actually work on the NCS with OpenVINO, since I had no luck with NCSDK in the past.

I have not confirmed the operation of "transposed convolution" myself, but I wish you good luck.
I recall hearing somewhere that the OpenVINO internal API is almost the same as NCSDK's.
However, it has been confirmed that some models that did not work with NCSDK do work on OpenVINO.
For example,
https://github.com/PINTO0309/TensorflowLite-UNet.git

The layers supported by OpenVINO are listed below.
https://github.com/PINTO0309/OpenVINO-YoloV3#openvino-supported-layers-as-of-dec-16-2018

This may be of no use to you, but I know that an Intel CPU shows better performance than the NCS2.
I have suspended development on the NCS2 until OpenVINO supports ARM processors.
openvinotoolkit/openvino#3

@umer-rasheed
Author

Hi Pinto,

Thanks again for the prompt reply. I have to say, I got more info from you than from Intel itself.
A quick question: I just tested your repo:
https://github.com/PINTO0309/ICNet-tensorflow

Although it runs fine, the quality of the inference is not good. In fact, it seems the network is not trained at all and is generating random results. Is your model trained, or just initialized and converted to IR?

Thanks again.

Umer

@PINTO0309
Owner

PINTO0309 commented Dec 18, 2018

@umer-rasheed

Is your model trained or just initialized and converted to IR?

Unfortunately, this is the only repository I failed to convert.
I used a trained model.
I spent a week on the conversion, but it did not work.
It probably requires tricky customization, as with "OpenVINO-DeeplabV3+".
By the way, I am currently trying to convert tiny-YoloV3 as well, but I have not succeeded yet.

@ikrets

ikrets commented Mar 8, 2019

I had a similar problem when trying to convert DeepLabV3 with a MobileNetV2 backbone. The only layer in that model that's unsupported by MYRIAD at the moment is ArgMax. When you convert the model to IR format, you can specify the resized logits of the model as the output:

python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--input 0:MobilenetV2/Conv/Conv2D --output ResizeBilinear_2 \
--input_shape [1,512,512,3] \
--data_type FP16 \
--output_dir .

That way you'll get a [512, 512, number_of_classes] output, which you can then argmax yourself outside of the MYRIAD.
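
For completeness, here is a minimal host-side sketch of that idea. The IR file names come from the command above, "input.jpg" is a placeholder, and the output blob is assumed to come back in Inference Engine's usual NCHW layout, i.e. [1, number_of_classes, 512, 512]:

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="frozen_inference_graph.xml",
                weights="frozen_inference_graph.bin")
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))

plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net)

# Preprocess: resize to the 512x512 network input and reorder HWC -> NCHW
image = cv2.imread("input.jpg")
frame = cv2.resize(image, (512, 512)).transpose((2, 0, 1))[np.newaxis, :]

# Run on the stick, then do the argmax over the class axis on the host
res = exec_net.infer({input_blob: frame})[out_blob]
seg_map = np.argmax(res[0], axis=0).astype(np.uint8)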

@PINTO0309
Owner

@ikrets
Thank you for your comment.
Yes, as you said, I was aware from the beginning that ArgMax is the problem.
Unfortunately, however, the pre-conversion model is not published by Intel.

@PINTO0309
Owner

PINTO0309 commented Mar 10, 2019

@umer-rasheed
@ikrets

I modified the model myself and excluded ArgMax, but the NCS / NCS 2 hangs during inference.
The OpenVINO API seems to have problems.

.xml
<?xml version="1.0" ?>
<net batch="1" name="semantic-segmentation-adas-0001" version="4">
	<layers>
		<layer id="0" name="data" precision="FP16" type="Input">
			<output>
				<port id="0">
					<dim>1</dim>
					<dim>3</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</output>
		</layer>
		<layer id="1" name="Mul_/Fused_Mul_/FusedScaleShift_" precision="FP16" type="ScaleShift">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>3</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>3</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</output>
			<blobs>
				<weights offset="0" size="6"/>
				<biases offset="6" size="6"/>
			</blobs>
		</layer>
		<layer id="2" name="AvgPool2DBackward2" precision="FP16" type="Pooling">
			<data exclude-pad="false" kernel="2,2" pads_begin="0,0" pads_end="0,0" pool-method="avg" rounding_type="ceil" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>3</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>3</dim>
					<dim>128</dim>
					<dim>256</dim>
				</port>
			</output>
		</layer>
		<layer id="3" name="ConvNdBackward3" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>3</dim>
					<dim>128</dim>
					<dim>256</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
			<blobs>
				<weights offset="12" size="1728"/>
				<biases offset="1740" size="64"/>
			</blobs>
		</layer>
		<layer id="4" name="ThresholdBackward5" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
		</layer>
		<layer id="5" name="ConvNdBackward6" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
			<blobs>
				<weights offset="1804" size="18432"/>
				<biases offset="20236" size="64"/>
			</blobs>
		</layer>
		<layer id="6" name="ThresholdBackward8" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
		</layer>
		<layer id="7" name="ConvNdBackward9" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="64" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
			<blobs>
				<weights offset="20300" size="36864"/>
				<biases offset="57164" size="128"/>
			</blobs>
		</layer>
		<layer id="8" name="ThresholdBackward11" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
		</layer>
		<layer id="9" name="MaxPool2DBackward12" precision="FP16" type="Pooling">
			<data exclude-pad="true" kernel="3,3" pads_begin="0,0" pads_end="0,0" pool-method="max" rounding_type="ceil" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="10" name="ConvNdBackward13" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="32" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="57292" size="4096"/>
				<biases offset="61388" size="64"/>
			</blobs>
		</layer>
		<layer id="11" name="ThresholdBackward15" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="12" name="ConvNdBackward16" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="61452" size="18432"/>
				<biases offset="79884" size="64"/>
			</blobs>
		</layer>
		<layer id="13" name="ThresholdBackward18" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="14" name="ConvNdBackward19" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="79948" size="8192"/>
				<biases offset="88140" size="256"/>
			</blobs>
		</layer>
		<layer id="15" name="ConvNdBackward22" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="88396" size="16384"/>
				<biases offset="104780" size="256"/>
			</blobs>
		</layer>
		<layer id="16" name="AddBackward124" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="17" name="ThresholdBackward25" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="18" name="ConvNdBackward26" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="32" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="105036" size="8192"/>
				<biases offset="113228" size="64"/>
			</blobs>
		</layer>
		<layer id="19" name="ThresholdBackward28" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="20" name="ConvNdBackward29" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="113292" size="18432"/>
				<biases offset="131724" size="64"/>
			</blobs>
		</layer>
		<layer id="21" name="ThresholdBackward31" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="22" name="ConvNdBackward32" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="131788" size="8192"/>
				<biases offset="139980" size="256"/>
			</blobs>
		</layer>
		<layer id="23" name="AddBackward135" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="24" name="ThresholdBackward36" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="25" name="ConvNdBackward37" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="32" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="140236" size="8192"/>
				<biases offset="148428" size="64"/>
			</blobs>
		</layer>
		<layer id="26" name="ThresholdBackward39" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="27" name="ConvNdBackward40" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="148492" size="18432"/>
				<biases offset="166924" size="64"/>
			</blobs>
		</layer>
		<layer id="28" name="ThresholdBackward42" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="29" name="ConvNdBackward43" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="166988" size="8192"/>
				<biases offset="175180" size="256"/>
			</blobs>
		</layer>
		<layer id="30" name="Pooling_" precision="FP16" type="Pooling">
			<data kernel="1,1" pads_begin="0,0" pads_end="0,0" pool-method="max" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="31" name="AddBackward146" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="32" name="ThresholdBackward47" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="33" name="ConvNdBackward48" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="64" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="175436" size="16384"/>
				<biases offset="191820" size="128"/>
			</blobs>
		</layer>
		<layer id="34" name="ThresholdBackward50" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="35" name="ConvNdBackward51" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="64" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="191948" size="73728"/>
				<biases offset="265676" size="128"/>
			</blobs>
		</layer>
		<layer id="36" name="ThresholdBackward53" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="37" name="ConvNdBackward54" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="265804" size="32768"/>
				<biases offset="298572" size="512"/>
			</blobs>
		</layer>
		<layer id="38" name="ConvNdBackward57" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="299084" size="65536"/>
				<biases offset="364620" size="512"/>
			</blobs>
		</layer>
		<layer id="39" name="AddBackward159" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="40" name="ThresholdBackward60" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="41" name="AvgPool2DBackward61" precision="FP16" type="Pooling">
			<data exclude-pad="false" kernel="2,2" pads_begin="0,0" pads_end="0,0" pool-method="avg" rounding_type="ceil" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="42" name="ConvNdBackward62" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="64" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="365132" size="32768"/>
				<biases offset="397900" size="128"/>
			</blobs>
		</layer>
		<layer id="43" name="ThresholdBackward64" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="44" name="ConvNdBackward65" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="64" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="398028" size="73728"/>
				<biases offset="471756" size="128"/>
			</blobs>
		</layer>
		<layer id="45" name="ThresholdBackward67" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="46" name="ConvNdBackward68" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="471884" size="32768"/>
				<biases offset="504652" size="512"/>
			</blobs>
		</layer>
		<layer id="47" name="AddBackward171" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="48" name="ThresholdBackward72" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="49" name="ConvNdBackward73" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="64" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="505164" size="32768"/>
				<biases offset="537932" size="128"/>
			</blobs>
		</layer>
		<layer id="50" name="ThresholdBackward75" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="51" name="ConvNdBackward76" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="64" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="538060" size="73728"/>
				<biases offset="611788" size="128"/>
			</blobs>
		</layer>
		<layer id="52" name="ThresholdBackward78" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="53" name="ConvNdBackward79" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="611916" size="32768"/>
				<biases offset="644684" size="512"/>
			</blobs>
		</layer>
		<layer id="54" name="AddBackward182" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="55" name="ThresholdBackward83" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="56" name="ConvNdBackward84" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="64" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="645196" size="32768"/>
				<biases offset="677964" size="128"/>
			</blobs>
		</layer>
		<layer id="57" name="ThresholdBackward86" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="58" name="ConvNdBackward87" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="64" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="678092" size="73728"/>
				<biases offset="751820" size="128"/>
			</blobs>
		</layer>
		<layer id="59" name="ThresholdBackward89" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="60" name="ConvNdBackward90" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="751948" size="32768"/>
				<biases offset="784716" size="512"/>
			</blobs>
		</layer>
		<layer id="61" name="AddBackward193" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="62" name="ThresholdBackward94" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="63" name="ConvNdBackward95" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="785228" size="65536"/>
				<biases offset="850764" size="256"/>
			</blobs>
		</layer>
		<layer id="64" name="ThresholdBackward97" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="65" name="ConvNdBackward98" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="128" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="851020" size="294912"/>
				<biases offset="1145932" size="256"/>
			</blobs>
		</layer>
		<layer id="66" name="ThresholdBackward100" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="67" name="ConvNdBackward101" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="512" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="1146188" size="131072"/>
				<biases offset="1277260" size="1024"/>
			</blobs>
		</layer>
		<layer id="68" name="ConvNdBackward104" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="512" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="1278284" size="262144"/>
				<biases offset="1540428" size="1024"/>
			</blobs>
		</layer>
		<layer id="69" name="AddBackward1106" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="70" name="ThresholdBackward107" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="71" name="ConvNdBackward108" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="1541452" size="131072"/>
				<biases offset="1672524" size="256"/>
			</blobs>
		</layer>
		<layer id="72" name="ThresholdBackward110" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="73" name="ConvNdBackward111" precision="FP16" type="Convolution">
			<data dilations="2,2" group="1" kernel="3,3" output="128" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="1672780" size="294912"/>
				<biases offset="1967692" size="256"/>
			</blobs>
		</layer>
		<layer id="74" name="ThresholdBackward113" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="75" name="ConvNdBackward114" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="512" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="1967948" size="131072"/>
				<biases offset="2099020" size="1024"/>
			</blobs>
		</layer>
		<layer id="76" name="AddBackward1117" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="77" name="ThresholdBackward118" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="78" name="ConvNdBackward119" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="2100044" size="131072"/>
				<biases offset="2231116" size="256"/>
			</blobs>
		</layer>
		<layer id="79" name="ThresholdBackward121" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="80" name="ConvNdBackward122" precision="FP16" type="Convolution">
			<data dilations="2,2" group="1" kernel="3,3" output="128" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="2231372" size="294912"/>
				<biases offset="2526284" size="256"/>
			</blobs>
		</layer>
		<layer id="81" name="ThresholdBackward124" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="82" name="ConvNdBackward125" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="512" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="2526540" size="131072"/>
				<biases offset="2657612" size="1024"/>
			</blobs>
		</layer>
		<layer id="83" name="AddBackward1128" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="84" name="ThresholdBackward129" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="85" name="ConvNdBackward130" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="2658636" size="131072"/>
				<biases offset="2789708" size="256"/>
			</blobs>
		</layer>
		<layer id="86" name="ThresholdBackward132" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="87" name="ConvNdBackward133" precision="FP16" type="Convolution">
			<data dilations="2,2" group="1" kernel="3,3" output="128" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="2789964" size="294912"/>
				<biases offset="3084876" size="256"/>
			</blobs>
		</layer>
		<layer id="88" name="ThresholdBackward135" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="89" name="ConvNdBackward136" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="512" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3085132" size="131072"/>
				<biases offset="3216204" size="1024"/>
			</blobs>
		</layer>
		<layer id="90" name="AddBackward1139" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="91" name="ThresholdBackward140" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="92" name="ConvNdBackward141" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3217228" size="131072"/>
				<biases offset="3348300" size="256"/>
			</blobs>
		</layer>
		<layer id="93" name="ThresholdBackward143" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="94" name="ConvNdBackward144" precision="FP16" type="Convolution">
			<data dilations="2,2" group="1" kernel="3,3" output="128" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3348556" size="294912"/>
				<biases offset="3643468" size="256"/>
			</blobs>
		</layer>
		<layer id="95" name="ThresholdBackward146" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="96" name="ConvNdBackward147" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="512" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3643724" size="131072"/>
				<biases offset="3774796" size="1024"/>
			</blobs>
		</layer>
		<layer id="97" name="AddBackward1150" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="98" name="ThresholdBackward151" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="99" name="ConvNdBackward152" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3775820" size="131072"/>
				<biases offset="3906892" size="256"/>
			</blobs>
		</layer>
		<layer id="100" name="ThresholdBackward154" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="101" name="ConvNdBackward155" precision="FP16" type="Convolution">
			<data dilations="2,2" group="1" kernel="3,3" output="128" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3907148" size="294912"/>
				<biases offset="4202060" size="256"/>
			</blobs>
		</layer>
		<layer id="102" name="ThresholdBackward157" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="103" name="ConvNdBackward158" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="512" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="4202316" size="131072"/>
				<biases offset="4333388" size="1024"/>
			</blobs>
		</layer>
		<layer id="104" name="AddBackward1161" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="105" name="ThresholdBackward162" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="106" name="ConvNdBackward163" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="4334412" size="262144"/>
				<biases offset="4596556" size="512"/>
			</blobs>
		</layer>
		<layer id="107" name="ThresholdBackward165" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="108" name="ConvNdBackward166" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="256" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="4597068" size="1179648"/>
				<biases offset="5776716" size="512"/>
			</blobs>
		</layer>
		<layer id="109" name="ThresholdBackward168" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="110" name="ConvNdBackward169" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="1024" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="5777228" size="524288"/>
				<biases offset="6301516" size="2048"/>
			</blobs>
		</layer>
		<layer id="111" name="ConvNdBackward172" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="1024" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="6303564" size="1048576"/>
				<biases offset="7352140" size="2048"/>
			</blobs>
		</layer>
		<layer id="112" name="AddBackward1174" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="113" name="ThresholdBackward175" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="114" name="ConvNdBackward176" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="7354188" size="524288"/>
				<biases offset="7878476" size="512"/>
			</blobs>
		</layer>
		<layer id="115" name="ThresholdBackward178" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="116" name="ConvNdBackward179" precision="FP16" type="Convolution">
			<data dilations="4,4" group="1" kernel="3,3" output="256" pads_begin="4,4" pads_end="4,4" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="7878988" size="1179648"/>
				<biases offset="9058636" size="512"/>
			</blobs>
		</layer>
		<layer id="117" name="ThresholdBackward181" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="118" name="ConvNdBackward182" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="1024" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="9059148" size="524288"/>
				<biases offset="9583436" size="2048"/>
			</blobs>
		</layer>
		<layer id="119" name="AddBackward1185" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="120" name="ThresholdBackward186" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="121" name="ConvNdBackward187" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="9585484" size="524288"/>
				<biases offset="10109772" size="512"/>
			</blobs>
		</layer>
		<layer id="122" name="ThresholdBackward189" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="123" name="ConvNdBackward190" precision="FP16" type="Convolution">
			<data dilations="4,4" group="1" kernel="3,3" output="256" pads_begin="4,4" pads_end="4,4" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="10110284" size="1179648"/>
				<biases offset="11289932" size="512"/>
			</blobs>
		</layer>
		<layer id="124" name="ThresholdBackward192" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="125" name="ConvNdBackward193" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="1024" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="11290444" size="524288"/>
				<biases offset="11814732" size="2048"/>
			</blobs>
		</layer>
		<layer id="126" name="AddBackward1196" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="127" name="ThresholdBackward197" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="128" name="AdaptiveAvgPool2dBackward199" precision="FP16" type="Pooling">
			<data exclude-pad="false" kernel="32,64" pads_begin="0,0" pads_end="0,0" pool-method="avg" rounding_type="ceil" strides="32,64"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
			</output>
		</layer>
		<layer id="129" name="UpsamplingBilinear2dBackward200" precision="FP16" type="Interp">
			<data align_corners="1" height="32" pad_beg="0" pad_end="0" shrink_factor="1" width="64" zoom_factor="1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="130" name="AddBackward1201" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="131" name="AdaptiveAvgPool2dBackward203" precision="FP16" type="Pooling">
			<data exclude-pad="false" kernel="16,32" pads_begin="0,0" pads_end="0,0" pool-method="avg" rounding_type="ceil" strides="16,32"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>2</dim>
					<dim>2</dim>
				</port>
			</output>
		</layer>
		<layer id="132" name="UpsamplingBilinear2dBackward204" precision="FP16" type="Interp">
			<data align_corners="1" height="32" pad_beg="0" pad_end="0" shrink_factor="1" width="64" zoom_factor="1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>2</dim>
					<dim>2</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="133" name="AddBackward1205" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="134" name="AdaptiveAvgPool2dBackward207" precision="FP16" type="Pooling">
			<data exclude-pad="false" kernel="12,22" pads_begin="0,0" pads_end="0,0" pool-method="avg" rounding_type="ceil" strides="11,22"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
			</output>
		</layer>
		<layer id="135" name="UpsamplingBilinear2dBackward208" precision="FP16" type="Interp">
			<data align_corners="1" height="32" pad_beg="0" pad_end="0" shrink_factor="1" width="64" zoom_factor="1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="136" name="AddBackward1209" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="137" name="AdaptiveAvgPool2dBackward211" precision="FP16" type="Pooling">
			<data exclude-pad="false" kernel="6,12" pads_begin="0,0" pads_end="0,0" pool-method="avg" rounding_type="ceil" strides="6,11"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>6</dim>
					<dim>6</dim>
				</port>
			</output>
		</layer>
		<layer id="138" name="UpsamplingBilinear2dBackward212" precision="FP16" type="Interp">
			<data align_corners="1" height="32" pad_beg="0" pad_end="0" shrink_factor="1" width="64" zoom_factor="1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>6</dim>
					<dim>6</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="139" name="AddBackward1213" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="140" name="ConvNdBackward214" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="256" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1024</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
			<blobs>
				<weights offset="11816780" size="524288"/>
				<biases offset="12341068" size="512"/>
			</blobs>
		</layer>
		<layer id="141" name="ThresholdBackward216" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</output>
		</layer>
		<layer id="142" name="UpsamplingBilinear2dBackward217" precision="FP16" type="Interp">
			<data align_corners="1" height="0" pad_beg="0" pad_end="0" shrink_factor="1" width="0" zoom_factor="2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>8</dim>
					<dim>16</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="143" name="ConvNdBackward218" precision="FP16" type="Convolution">
			<data dilations="2,2" group="1" kernel="3,3" output="128" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="12341580" size="589824"/>
				<biases offset="12931404" size="256"/>
			</blobs>
		</layer>
		<layer id="144" name="ConvNdBackward221" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>256</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
			<blobs>
				<weights offset="12931660" size="65536"/>
				<biases offset="12997196" size="256"/>
			</blobs>
		</layer>
		<layer id="145" name="AddBackward1223" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="146" name="ThresholdBackward224" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="147" name="UpsamplingBilinear2dBackward225" precision="FP16" type="Interp">
			<data align_corners="1" height="0" pad_beg="0" pad_end="0" shrink_factor="1" width="0" zoom_factor="2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>16</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="148" name="ConvNdBackward226" precision="FP16" type="Convolution">
			<data dilations="2,2" group="1" kernel="3,3" output="128" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="12997452" size="294912"/>
				<biases offset="13292364" size="256"/>
			</blobs>
		</layer>
		<layer id="149" name="ConvNdBackward229" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>3</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>128</dim>
					<dim>256</dim>
				</port>
			</output>
			<blobs>
				<weights offset="13292620" size="1728"/>
				<biases offset="13294348" size="64"/>
			</blobs>
		</layer>
		<layer id="150" name="ThresholdBackward231" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>128</dim>
					<dim>256</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>128</dim>
					<dim>256</dim>
				</port>
			</output>
		</layer>
		<layer id="151" name="ConvNdBackward232" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>128</dim>
					<dim>256</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
			<blobs>
				<weights offset="13294412" size="18432"/>
				<biases offset="13312844" size="64"/>
			</blobs>
		</layer>
		<layer id="152" name="ThresholdBackward234" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
		</layer>
		<layer id="153" name="ConvNdBackward235" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="3,3" output="64" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>64</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="13312908" size="36864"/>
				<biases offset="13349772" size="128"/>
			</blobs>
		</layer>
		<layer id="154" name="ThresholdBackward237" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>64</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="155" name="ConvNdBackward238" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="128" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>64</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
			<blobs>
				<weights offset="13349900" size="16384"/>
				<biases offset="13366284" size="256"/>
			</blobs>
		</layer>
		<layer id="156" name="AddBackward1240" precision="FP16" type="Eltwise">
			<data coeff="" operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="157" name="ThresholdBackward241" precision="FP16" type="ReLU">
			<data negative_slope="0.0"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</output>
		</layer>
		<layer id="158" name="UpsamplingBilinear2dBackward242" precision="FP16" type="Interp">
			<data align_corners="1" height="0" pad_beg="0" pad_end="0" shrink_factor="1" width="0" zoom_factor="2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>32</dim>
					<dim>64</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>128</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
		</layer>
		<layer id="159" name="ConvNdBackward243" precision="FP16" type="Convolution">
			<data dilations="1,1" group="1" kernel="1,1" output="20" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>128</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>20</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
			<blobs>
				<weights offset="13366540" size="5120"/>
				<biases offset="13371660" size="40"/>
			</blobs>
		</layer>
		<layer id="160" name="LogSoftmaxBackward244" precision="FP16" type="SoftMax">
			<data axis="1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>20</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>20</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</output>
		</layer>
		<layer id="161" name="UpsamplingBilinear2dBackward245" precision="FP16" type="Interp">
			<data align_corners="1" height="0" pad_beg="0" pad_end="0" shrink_factor="1" width="0" zoom_factor="4"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>20</dim>
					<dim>64</dim>
					<dim>128</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>20</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</output>
		</layer>
		<layer id="162" name="Rectify/Mul_" precision="FP16" type="ScaleShift">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>20</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>20</dim>
					<dim>256</dim>
					<dim>512</dim>
				</port>
			</output>
			<blobs>
				<weights offset="13371700" size="40"/>
				<biases offset="13371740" size="40"/>
			</blobs>
		</layer>
	</layers>
	<edges>
		<edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
		<edge from-layer="1" from-port="3" to-layer="2" to-port="0"/>
		<edge from-layer="2" from-port="1" to-layer="3" to-port="0"/>
		<edge from-layer="3" from-port="3" to-layer="4" to-port="0"/>
		<edge from-layer="4" from-port="1" to-layer="5" to-port="0"/>
		<edge from-layer="5" from-port="3" to-layer="6" to-port="0"/>
		<edge from-layer="6" from-port="1" to-layer="7" to-port="0"/>
		<edge from-layer="7" from-port="3" to-layer="8" to-port="0"/>
		<edge from-layer="8" from-port="1" to-layer="9" to-port="0"/>
		<edge from-layer="9" from-port="1" to-layer="10" to-port="0"/>
		<edge from-layer="10" from-port="3" to-layer="11" to-port="0"/>
		<edge from-layer="11" from-port="1" to-layer="12" to-port="0"/>
		<edge from-layer="12" from-port="3" to-layer="13" to-port="0"/>
		<edge from-layer="13" from-port="1" to-layer="14" to-port="0"/>
		<edge from-layer="9" from-port="1" to-layer="15" to-port="0"/>
		<edge from-layer="14" from-port="3" to-layer="16" to-port="0"/>
		<edge from-layer="15" from-port="3" to-layer="16" to-port="1"/>
		<edge from-layer="16" from-port="2" to-layer="17" to-port="0"/>
		<edge from-layer="17" from-port="1" to-layer="18" to-port="0"/>
		<edge from-layer="18" from-port="3" to-layer="19" to-port="0"/>
		<edge from-layer="19" from-port="1" to-layer="20" to-port="0"/>
		<edge from-layer="20" from-port="3" to-layer="21" to-port="0"/>
		<edge from-layer="21" from-port="1" to-layer="22" to-port="0"/>
		<edge from-layer="22" from-port="3" to-layer="23" to-port="0"/>
		<edge from-layer="17" from-port="1" to-layer="23" to-port="1"/>
		<edge from-layer="23" from-port="2" to-layer="24" to-port="0"/>
		<edge from-layer="24" from-port="1" to-layer="25" to-port="0"/>
		<edge from-layer="25" from-port="3" to-layer="26" to-port="0"/>
		<edge from-layer="26" from-port="1" to-layer="27" to-port="0"/>
		<edge from-layer="27" from-port="3" to-layer="28" to-port="0"/>
		<edge from-layer="28" from-port="1" to-layer="29" to-port="0"/>
		<edge from-layer="24" from-port="1" to-layer="30" to-port="0"/>
		<edge from-layer="29" from-port="3" to-layer="31" to-port="0"/>
		<edge from-layer="30" from-port="1" to-layer="31" to-port="1"/>
		<edge from-layer="31" from-port="2" to-layer="32" to-port="0"/>
		<edge from-layer="32" from-port="1" to-layer="33" to-port="0"/>
		<edge from-layer="33" from-port="3" to-layer="34" to-port="0"/>
		<edge from-layer="34" from-port="1" to-layer="35" to-port="0"/>
		<edge from-layer="35" from-port="3" to-layer="36" to-port="0"/>
		<edge from-layer="36" from-port="1" to-layer="37" to-port="0"/>
		<edge from-layer="32" from-port="1" to-layer="38" to-port="0"/>
		<edge from-layer="37" from-port="3" to-layer="39" to-port="0"/>
		<edge from-layer="38" from-port="3" to-layer="39" to-port="1"/>
		<edge from-layer="39" from-port="2" to-layer="40" to-port="0"/>
		<edge from-layer="40" from-port="1" to-layer="41" to-port="0"/>
		<edge from-layer="41" from-port="1" to-layer="42" to-port="0"/>
		<edge from-layer="42" from-port="3" to-layer="43" to-port="0"/>
		<edge from-layer="43" from-port="1" to-layer="44" to-port="0"/>
		<edge from-layer="44" from-port="3" to-layer="45" to-port="0"/>
		<edge from-layer="45" from-port="1" to-layer="46" to-port="0"/>
		<edge from-layer="46" from-port="3" to-layer="47" to-port="0"/>
		<edge from-layer="41" from-port="1" to-layer="47" to-port="1"/>
		<edge from-layer="47" from-port="2" to-layer="48" to-port="0"/>
		<edge from-layer="48" from-port="1" to-layer="49" to-port="0"/>
		<edge from-layer="49" from-port="3" to-layer="50" to-port="0"/>
		<edge from-layer="50" from-port="1" to-layer="51" to-port="0"/>
		<edge from-layer="51" from-port="3" to-layer="52" to-port="0"/>
		<edge from-layer="52" from-port="1" to-layer="53" to-port="0"/>
		<edge from-layer="53" from-port="3" to-layer="54" to-port="0"/>
		<edge from-layer="48" from-port="1" to-layer="54" to-port="1"/>
		<edge from-layer="54" from-port="2" to-layer="55" to-port="0"/>
		<edge from-layer="55" from-port="1" to-layer="56" to-port="0"/>
		<edge from-layer="56" from-port="3" to-layer="57" to-port="0"/>
		<edge from-layer="57" from-port="1" to-layer="58" to-port="0"/>
		<edge from-layer="58" from-port="3" to-layer="59" to-port="0"/>
		<edge from-layer="59" from-port="1" to-layer="60" to-port="0"/>
		<edge from-layer="60" from-port="3" to-layer="61" to-port="0"/>
		<edge from-layer="55" from-port="1" to-layer="61" to-port="1"/>
		<edge from-layer="61" from-port="2" to-layer="62" to-port="0"/>
		<edge from-layer="62" from-port="1" to-layer="63" to-port="0"/>
		<edge from-layer="63" from-port="3" to-layer="64" to-port="0"/>
		<edge from-layer="64" from-port="1" to-layer="65" to-port="0"/>
		<edge from-layer="65" from-port="3" to-layer="66" to-port="0"/>
		<edge from-layer="66" from-port="1" to-layer="67" to-port="0"/>
		<edge from-layer="62" from-port="1" to-layer="68" to-port="0"/>
		<edge from-layer="67" from-port="3" to-layer="69" to-port="0"/>
		<edge from-layer="68" from-port="3" to-layer="69" to-port="1"/>
		<edge from-layer="69" from-port="2" to-layer="70" to-port="0"/>
		<edge from-layer="70" from-port="1" to-layer="71" to-port="0"/>
		<edge from-layer="71" from-port="3" to-layer="72" to-port="0"/>
		<edge from-layer="72" from-port="1" to-layer="73" to-port="0"/>
		<edge from-layer="73" from-port="3" to-layer="74" to-port="0"/>
		<edge from-layer="74" from-port="1" to-layer="75" to-port="0"/>
		<edge from-layer="75" from-port="3" to-layer="76" to-port="0"/>
		<edge from-layer="70" from-port="1" to-layer="76" to-port="1"/>
		<edge from-layer="76" from-port="2" to-layer="77" to-port="0"/>
		<edge from-layer="77" from-port="1" to-layer="78" to-port="0"/>
		<edge from-layer="78" from-port="3" to-layer="79" to-port="0"/>
		<edge from-layer="79" from-port="1" to-layer="80" to-port="0"/>
		<edge from-layer="80" from-port="3" to-layer="81" to-port="0"/>
		<edge from-layer="81" from-port="1" to-layer="82" to-port="0"/>
		<edge from-layer="82" from-port="3" to-layer="83" to-port="0"/>
		<edge from-layer="77" from-port="1" to-layer="83" to-port="1"/>
		<edge from-layer="83" from-port="2" to-layer="84" to-port="0"/>
		<edge from-layer="84" from-port="1" to-layer="85" to-port="0"/>
		<edge from-layer="85" from-port="3" to-layer="86" to-port="0"/>
		<edge from-layer="86" from-port="1" to-layer="87" to-port="0"/>
		<edge from-layer="87" from-port="3" to-layer="88" to-port="0"/>
		<edge from-layer="88" from-port="1" to-layer="89" to-port="0"/>
		<edge from-layer="89" from-port="3" to-layer="90" to-port="0"/>
		<edge from-layer="84" from-port="1" to-layer="90" to-port="1"/>
		<edge from-layer="90" from-port="2" to-layer="91" to-port="0"/>
		<edge from-layer="91" from-port="1" to-layer="92" to-port="0"/>
		<edge from-layer="92" from-port="3" to-layer="93" to-port="0"/>
		<edge from-layer="93" from-port="1" to-layer="94" to-port="0"/>
		<edge from-layer="94" from-port="3" to-layer="95" to-port="0"/>
		<edge from-layer="95" from-port="1" to-layer="96" to-port="0"/>
		<edge from-layer="96" from-port="3" to-layer="97" to-port="0"/>
		<edge from-layer="91" from-port="1" to-layer="97" to-port="1"/>
		<edge from-layer="97" from-port="2" to-layer="98" to-port="0"/>
		<edge from-layer="98" from-port="1" to-layer="99" to-port="0"/>
		<edge from-layer="99" from-port="3" to-layer="100" to-port="0"/>
		<edge from-layer="100" from-port="1" to-layer="101" to-port="0"/>
		<edge from-layer="101" from-port="3" to-layer="102" to-port="0"/>
		<edge from-layer="102" from-port="1" to-layer="103" to-port="0"/>
		<edge from-layer="103" from-port="3" to-layer="104" to-port="0"/>
		<edge from-layer="98" from-port="1" to-layer="104" to-port="1"/>
		<edge from-layer="104" from-port="2" to-layer="105" to-port="0"/>
		<edge from-layer="105" from-port="1" to-layer="106" to-port="0"/>
		<edge from-layer="106" from-port="3" to-layer="107" to-port="0"/>
		<edge from-layer="107" from-port="1" to-layer="108" to-port="0"/>
		<edge from-layer="108" from-port="3" to-layer="109" to-port="0"/>
		<edge from-layer="109" from-port="1" to-layer="110" to-port="0"/>
		<edge from-layer="105" from-port="1" to-layer="111" to-port="0"/>
		<edge from-layer="110" from-port="3" to-layer="112" to-port="0"/>
		<edge from-layer="111" from-port="3" to-layer="112" to-port="1"/>
		<edge from-layer="112" from-port="2" to-layer="113" to-port="0"/>
		<edge from-layer="113" from-port="1" to-layer="114" to-port="0"/>
		<edge from-layer="114" from-port="3" to-layer="115" to-port="0"/>
		<edge from-layer="115" from-port="1" to-layer="116" to-port="0"/>
		<edge from-layer="116" from-port="3" to-layer="117" to-port="0"/>
		<edge from-layer="117" from-port="1" to-layer="118" to-port="0"/>
		<edge from-layer="118" from-port="3" to-layer="119" to-port="0"/>
		<edge from-layer="113" from-port="1" to-layer="119" to-port="1"/>
		<edge from-layer="119" from-port="2" to-layer="120" to-port="0"/>
		<edge from-layer="120" from-port="1" to-layer="121" to-port="0"/>
		<edge from-layer="121" from-port="3" to-layer="122" to-port="0"/>
		<edge from-layer="122" from-port="1" to-layer="123" to-port="0"/>
		<edge from-layer="123" from-port="3" to-layer="124" to-port="0"/>
		<edge from-layer="124" from-port="1" to-layer="125" to-port="0"/>
		<edge from-layer="125" from-port="3" to-layer="126" to-port="0"/>
		<edge from-layer="120" from-port="1" to-layer="126" to-port="1"/>
		<edge from-layer="126" from-port="2" to-layer="127" to-port="0"/>
		<edge from-layer="127" from-port="1" to-layer="128" to-port="0"/>
		<edge from-layer="128" from-port="1" to-layer="129" to-port="0"/>
		<edge from-layer="127" from-port="1" to-layer="130" to-port="0"/>
		<edge from-layer="129" from-port="1" to-layer="130" to-port="1"/>
		<edge from-layer="127" from-port="1" to-layer="131" to-port="0"/>
		<edge from-layer="131" from-port="1" to-layer="132" to-port="0"/>
		<edge from-layer="130" from-port="2" to-layer="133" to-port="0"/>
		<edge from-layer="132" from-port="1" to-layer="133" to-port="1"/>
		<edge from-layer="127" from-port="1" to-layer="134" to-port="0"/>
		<edge from-layer="134" from-port="1" to-layer="135" to-port="0"/>
		<edge from-layer="133" from-port="2" to-layer="136" to-port="0"/>
		<edge from-layer="135" from-port="1" to-layer="136" to-port="1"/>
		<edge from-layer="127" from-port="1" to-layer="137" to-port="0"/>
		<edge from-layer="137" from-port="1" to-layer="138" to-port="0"/>
		<edge from-layer="136" from-port="2" to-layer="139" to-port="0"/>
		<edge from-layer="138" from-port="1" to-layer="139" to-port="1"/>
		<edge from-layer="139" from-port="2" to-layer="140" to-port="0"/>
		<edge from-layer="140" from-port="3" to-layer="141" to-port="0"/>
		<edge from-layer="141" from-port="1" to-layer="142" to-port="0"/>
		<edge from-layer="142" from-port="1" to-layer="143" to-port="0"/>
		<edge from-layer="40" from-port="1" to-layer="144" to-port="0"/>
		<edge from-layer="143" from-port="3" to-layer="145" to-port="0"/>
		<edge from-layer="144" from-port="3" to-layer="145" to-port="1"/>
		<edge from-layer="145" from-port="2" to-layer="146" to-port="0"/>
		<edge from-layer="146" from-port="1" to-layer="147" to-port="0"/>
		<edge from-layer="147" from-port="1" to-layer="148" to-port="0"/>
		<edge from-layer="1" from-port="3" to-layer="149" to-port="0"/>
		<edge from-layer="149" from-port="3" to-layer="150" to-port="0"/>
		<edge from-layer="150" from-port="1" to-layer="151" to-port="0"/>
		<edge from-layer="151" from-port="3" to-layer="152" to-port="0"/>
		<edge from-layer="152" from-port="1" to-layer="153" to-port="0"/>
		<edge from-layer="153" from-port="3" to-layer="154" to-port="0"/>
		<edge from-layer="154" from-port="1" to-layer="155" to-port="0"/>
		<edge from-layer="148" from-port="3" to-layer="156" to-port="0"/>
		<edge from-layer="155" from-port="3" to-layer="156" to-port="1"/>
		<edge from-layer="156" from-port="2" to-layer="157" to-port="0"/>
		<edge from-layer="157" from-port="1" to-layer="158" to-port="0"/>
		<edge from-layer="158" from-port="1" to-layer="159" to-port="0"/>
		<edge from-layer="159" from-port="3" to-layer="160" to-port="0"/>
		<edge from-layer="160" from-port="1" to-layer="161" to-port="0"/>
		<edge from-layer="161" from-port="1" to-layer="162" to-port="0"/>
	</edges>
	<meta_data>
		<MO_version value="1.5.4.dacdc0a0"/>
		<cli_parameters>
			<data_type value="FP16"/>
			<disable_fusing value="False"/>
			<disable_gfusing value="False"/>
			<disable_nhwc_to_nchw value="False"/>
			<disable_omitting_optional value="False"/>
			<disable_resnet_optimization value="False"/>
			<enable_flattening_nested_params value="False"/>
			<extensions value="DIR"/>
			<framework value="caffe"/>
			<generate_deprecated_IR_V2 value="False"/>
			<input value="data"/>
			<input_model value="DIR/model.caffemodel"/>
			<input_model_is_text value="False"/>
			<input_proto value="DIR/model.prototxt"/>
			<input_shape value="[1,3,1024,2048]"/>
			<k value="DIR/CustomLayersMapping.xml"/>
			<legacy_mxnet_model value="False"/>
			<log_level value="ERROR"/>
			<mean_values value="()"/>
			<model_name value="semantic-segmentation-adas-0001"/>
			<move_to_preprocess value="False"/>
			<offload_unsupported_operations_to_tf value="False"/>
			<output value="Rectify/Mul_"/>
			<output_dir value="DIR"/>
			<remove_output_softmax value="False"/>
			<reverse_input_channels value="False"/>
			<save_params_from_nd value="False"/>
			<scale_values value="()"/>
			<silent value="False"/>
			<version value="False"/>
			<unset unset_cli_parameters="batch, counts, finegrain_fusing, freeze_placeholder_with_value, input_checkpoint, input_meta_graph, input_symbol, mean_file, mean_file_offsets, nd_prefix_name, pretrained_model_name, saved_model_dir, saved_model_tags, scale, tensorboard_logdir, tensorflow_custom_layer_libraries, tensorflow_custom_operations_config_update, tensorflow_object_detection_api_pipeline_config, tensorflow_operation_patterns, tensorflow_subgraph_patterns, tensorflow_use_custom_operations_config"/>
		</cli_parameters>
	</meta_data>
</net>
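
For reference, since the IR above ends in "Rectify/Mul_" (shape 1x20x256x512) rather than an ArgMax layer, the class map has to be computed on the host after inference. Below is a minimal sketch against the OpenVINO 2018 R4/R5 Python API used earlier in this thread; the file names and input image are placeholders, and the blob names "data" and "Rectify/Mul_" are taken directly from the IR above:

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder paths; point these at the FP16 IR shown above.
model_xml = "semantic-segmentation-adas-0001.xml"
model_bin = "semantic-segmentation-adas-0001.bin"

plugin = IEPlugin(device="MYRIAD")
net = IENetwork(model=model_xml, weights=model_bin)
exec_net = plugin.load(network=net)

# The IR declares input "data" as [1,3,1024,2048] (NCHW)
# and output "Rectify/Mul_" as [1,20,256,512].
frame = cv2.imread("input.jpg")                 # placeholder image
img = cv2.resize(frame, (2048, 1024))           # resize to W x H
img = img.transpose((2, 0, 1))[np.newaxis, :]   # HWC -> NCHW

res = exec_net.infer(inputs={"data": img})
scores = res["Rectify/Mul_"][0]                 # 20 x 256 x 512

# ArgMax is unsupported on MYRIAD, so offload it to the host:
class_map = np.argmax(scores, axis=0).astype(np.uint8)

The resulting class_map is a 256x512 index image (one of 20 classes per pixel) that can be color-mapped and resized back to the input resolution for display.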
