
add onnx export module, test=develop #27831

Merged
merged 6 commits into PaddlePaddle:develop on Nov 13, 2020

Conversation

@Channingss (Contributor) commented Oct 12, 2020

PR types

New features

PR changes

APIs

Describe

Add an ONNX export module by lazily importing paddle2onnx.

Preview of the Chinese API documentation:
[screenshot]

The English API documentation preview is not yet available.

@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

@Channingss Channingss changed the title add onnx export module, test=develop [WIP] add onnx export module, test=develop Oct 12, 2020
@ZeyuChen (Member) left a comment:

The documentation should use the latest APIs; remove fluid.

        return x * y, x / y

def export_with_input_spec():
    paddle.fluid.enable_dygraph()
Member:

This does not use the latest version of the API.

def export_with_input_variable():
    paddle.fluid.enable_dygraph()
    model = Model()
    x = paddle.fluid.dygraph.to_variable(np.array([1]).astype('float32'), name='x')
Member:

The to_variable method is being removed going forward; use paddle.to_tensor consistently.

__all__ = ['export']


def export(layer, save_file, input_spec=None, opset_version=9, **kwargs):
Member:

Do the equivalent functions in competing frameworks return a value, or do they just export the file and finish?

Contributor (Author):

I checked: torch, mxnet, and oneflow all write to a file and do not return the ONNX proto object.

@ZeyuChen (Member) left a comment:

Overall: fix the comments and the detail issues.


def test_prune_graph(self):
    model = Logic()
    # prune model with input_spec and output_spec, which need to static and run model before export.
Member:

Please write code comments in standard English, starting with a capital letter.

raise ValueError(
    "The input path MUST be format of dirname/file_prefix "
    "[dirname\\file_prefix in Windows system], but received "
    "file_prefix is empty string.")
Member:

Here your error message renames path to file_prefix, but users do not know what that variable is; they only see the path argument.

Member:

So consider revising this error message to make it more informative.

"""
Export Layer as ONNX format model, which can be used for inference.
Now, it supports a limited operater set and dynamic models.(e.g., MobileNet.)
More features and introduction, Please reference `Github <https://github.com/PaddlePaddle/paddle2onnx>`_ .
Member:

In the docstring here: "Please reference xxxx" -> "Please refer to xxxx".

Contributor (Author):

done


def export(layer, path, input_spec=None, opset_version=9, **configs):
"""
Export Layer as ONNX format model, which can be used for inference.
Member:

"Export Layer as" -> "Export Layer to"

Member:

Suggested wording: "which can be used for inference with onnxruntime."

Contributor (Author):

done

def export(layer, path, input_spec=None, opset_version=9, **configs):
"""
Export Layer as ONNX format model, which can be used for inference.
Now, it supports a limited operater set and dynamic models.(e.g., MobileNet.)
Member:

Remove this line of the docstring; instead, refer to a pinned paddle2onnx README so users can check the latest supported model zoo.

Contributor (Author):

I deleted this passage entirely; the lists of supported ops and models will go into the tutorial later. Also, the model zoo has no dynamic-graph docs yet; I will add them once they are complete.

method, which can be described by InputSpec or example Tensor. If None, all input variables of
the original Layer's forward method would be the inputs of the exported ``ONNX`` model. Default None.
opset_version(int, optional): Opset version of exported ONNX model.
Now, stable supported opset version include 9, 10, 11. Default 9.
Member:

"Default 9" -> "Default: 9"

    def forward(self, x):
        return self._linear(x)

# export model with InputSpec, which supports set dynamic shape for inputs.
Member:

Comments inside the docstring examples should also follow the grammar conventions. Wrap Paddle identifiers such as InputSpec in double backticks, and simplify the wording:

Export model with ``InputSpec`` to support dynamic input shape.

Member:

Prefer short sentences in comments.

Contributor (Author):

done

    else:
        return y

# export model with Tensor, which supports prune model by set 'output_spec' with output of model.
Member:

Suggested wording: Export model with paddle.Tensor as input_spec and model output as output_spec; the ONNX exporter will automatically prune the inference model.
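The pruning the reviewer describes, keeping only the subgraph that the selected outputs actually depend on, can be illustrated with a generic backward-reachability walk. `prune_graph` is a stand-alone sketch of the idea, not paddle2onnx's implementation:

```python
def prune_graph(deps, outputs):
    """Return the set of nodes reachable backwards from `outputs`.

    `deps` maps each node name to the nodes it consumes. Nodes that no
    requested output depends on are dropped, which is what output_spec-based
    pruning amounts to.
    """
    keep, stack = set(), list(outputs)
    while stack:
        node = stack.pop()
        if node in keep:
            continue
        keep.add(node)
        stack.extend(deps.get(node, []))
    return keep


# 'w' feeds only output 'v'; selecting output 'z' prunes 'v' and 'w' away.
deps = {'z': ['x', 'y'], 'v': ['w'], 'w': ['x']}
print(sorted(prune_graph(deps, ['z'])))  # prints: ['x', 'y', 'z']
```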

@@ -271,6 +271,8 @@
from . import static
from . import amp

Member:

Why is there an unexplained blank line here?

Contributor (Author):

Removed the blank line.

@@ -271,6 +271,8 @@
from . import static
from . import amp

Member:

Why is there an unexplained blank line here?

@ZeyuChen (Member) left a comment:

Only a few minor English suggestions remain.

def export(layer, path, input_spec=None, opset_version=9, **configs):
"""
Export Layer to ONNX format, which can be inferenced by onnxruntime or other backends.
More details, Please refer to `paddle2onnx <https://github.com/PaddlePaddle/paddle2onnx>`_ .
Member:

"More details," -> "For more details,"

Contributor (Author):

done


def export(layer, path, input_spec=None, opset_version=9, **configs):
"""
Export Layer to ONNX format, which can be inferenced by onnxruntime or other backends.
Member:

"which can be inferenced" -> "which can be used for inference via onnxruntime or other backends."

Contributor (Author):

done

    else:
        return y

# Export model with 'Tensor' to support prune model by set 'output_spec'.
Member:

"prune model" -> "pruned model"

Contributor (Author):

done

@Channingss Channingss changed the title [WIP] add onnx export module, test=develop add onnx export module, test=develop Nov 12, 2020
@ZeyuChen (Member) left a comment:

LGTM

@TCChenlong (Contributor) left a comment:

LGTM
TODO: review docs

@XiaoguangHu01 (Contributor) left a comment:

LGTM

@ZeyuChen ZeyuChen merged commit c545b9b into PaddlePaddle:develop Nov 13, 2020