[xdoctest][task 122] Reformat example code with google style in python/paddle/quantization/qat.py #56233
Conversation
Your PR was submitted successfully. Thank you for your contribution to the open source project! |
✅ This PR's description meets the template requirements! |
python/paddle/quantization/qat.py
Outdated
.. code-block:: python

    from paddle.quantization import QAT, QuantConfig
    from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
    from paddle.vision.models import LeNet

    quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
    q_config = QuantConfig(activation=quanter, weight=quanter)
    qat = QAT(q_config)
    model = LeNet()
    quant_model = qat.quantize(model)
    print(quant_model)
>>> from paddle.quantization import QAT, QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
>>> from paddle.vision.models import LeNet

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=quanter, weight=quanter)
>>> qat = QAT(q_config)
>>> model = LeNet()
>>> quant_model = qat.quantize(model)
>>> print(quant_model)
LeNet(
  (features): Sequential(
    (0): QuantedConv2D(
      (weight_quanter): FakeQuanterWithAbsMaxObserverLayer()
      (activation_quanter): FakeQuanterWithAbsMaxObserverLayer()
    )
    (1): ObserveWrapper(
      (_observer): FakeQuanterWithAbsMaxObserverLayer()
      (_observed): ReLU()
    )
    (2): ObserveWrapper(
      (_observer): FakeQuanterWithAbsMaxObserverLayer()
      (_observed): MaxPool2D(kernel_size=2, stride=2, padding=0)
    )
    (3): QuantedConv2D(
      (weight_quanter): FakeQuanterWithAbsMaxObserverLayer()
      (activation_quanter): FakeQuanterWithAbsMaxObserverLayer()
    )
    (4): ObserveWrapper(
      (_observer): FakeQuanterWithAbsMaxObserverLayer()
      (_observed): ReLU()
    )
    (5): ObserveWrapper(
      (_observer): FakeQuanterWithAbsMaxObserverLayer()
      (_observed): MaxPool2D(kernel_size=2, stride=2, padding=0)
    )
  )
  (fc): Sequential(
    (0): QuantedLinear(
      (weight_quanter): FakeQuanterWithAbsMaxObserverLayer()
      (activation_quanter): FakeQuanterWithAbsMaxObserverLayer()
    )
    (1): QuantedLinear(
      (weight_quanter): FakeQuanterWithAbsMaxObserverLayer()
      (activation_quanter): FakeQuanterWithAbsMaxObserverLayer()
    )
    (2): QuantedLinear(
      (weight_quanter): FakeQuanterWithAbsMaxObserverLayer()
      (activation_quanter): FakeQuanterWithAbsMaxObserverLayer()
    )
  )
)
Please increase the indentation level of the whole block by one.
Sure, fixed~~
LGTMeow 🐾
…n/paddle/quantization/qat.py (PaddlePaddle#56233) * [xdoctest][task 122] test=docs_preview * test=document_fix * fix indent --------- Co-authored-by: SigureMo <sigure.qaq@gmail.com>
PR types
Others
PR changes
Docs
Description
Corresponding tracking issue: #55629
Preview link: