The learning rate scheduler does not behave as expected, and importantly, the learning rate schedule differs depending on whether you use PyTorch 1.13.0 or 2.0.0. For torch 2.0.0, I get:
[001] lr = 5.905e-02
[002] lr = 3.138e-02
[003] lr = 1.668e-02
[004] lr = 8.863e-03
[005] lr = 4.710e-03
[006] lr = 2.503e-03
[007] lr = 1.330e-03
[008] lr = 7.070e-04
[009] lr = 3.757e-04
[010] lr = 1.997e-04
accompanied by the warning:
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
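The warning asks for a specific call order. As a point of reference, here is a minimal pure-PyTorch sketch of the pattern the warning expects (the dummy parameter and the gamma=0.9 / lr=0.1 values are illustrative, chosen to match the numbers in this report):

```python
import torch

# Dummy parameter so the optimizer has something to own.
param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.SGD([param], lr=0.1)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)

lrs = []
for epoch in range(3):
    lrs.append(opt.param_groups[0]["lr"])  # lr in effect this epoch
    opt.step()    # optimizer first...
    sched.step()  # ...then scheduler, as the warning requires
```

With this order the recorded schedule starts at the initial lr (0.1, 0.09, 0.081, ...), with no value skipped and no warning emitted.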
while for torch 1.13.0, I get:
[001] lr = 1.000e-01
[002] lr = 9.000e-02
[003] lr = 8.100e-02
[004] lr = 7.290e-02
[005] lr = 6.561e-02
[006] lr = 5.905e-02
[007] lr = 5.314e-02
[008] lr = 4.783e-02
[009] lr = 4.305e-02
[010] lr = 3.874e-02
The latter is much closer to what I'd expect, given that 0.1 * 0.9**10 ≈ 0.0349. I also don't get the warning with PyTorch 1.13.0.
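The expected schedule can be checked by hand, since ExponentialLR should give lr_n = lr0 * gamma**n after n steps:

```python
# Expected ExponentialLR schedule: lr_n = lr0 * gamma**n.
lr0, gamma = 0.1, 0.9
expected = [lr0 * gamma**n for n in range(11)]

# expected[0] is 0.1 (epoch 1) and expected[5] is ~0.059049,
# matching the 1.13.0 output above; after 10 decays the lr
# should be lr0 * gamma**10, i.e. ~0.0349.
final = lr0 * gamma**10
```

Note that the 2.0.0 schedule above instead *starts* at 5.905e-02 = 0.1 * 0.9**5, i.e. several decay steps appear to have been applied before the first epoch.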
If I do what I think should be the equivalent thing in pure PyTorch 1.13.0, without any Pyro code, I see the following learning rate schedule, which agrees with the Pyro + PyTorch 1.13.0 run above:
[001] lr = 1.000e-01
[002] lr = 9.000e-02
[003] lr = 8.100e-02
[004] lr = 7.290e-02
[005] lr = 6.561e-02
[006] lr = 5.905e-02
[007] lr = 5.314e-02
[008] lr = 4.783e-02
[009] lr = 4.305e-02
[010] lr = 3.874e-02
and the same pure-PyTorch code with PyTorch 2.0.0 gives the same thing:
[001] lr = 1.000e-01
[002] lr = 9.000e-02
[003] lr = 8.100e-02
[004] lr = 7.290e-02
[005] lr = 6.561e-02
[006] lr = 5.905e-02
[007] lr = 5.314e-02
[008] lr = 4.783e-02
[009] lr = 4.305e-02
[010] lr = 3.874e-02
So I think this is somehow related to the Pyro + PyTorch interface.
Environment
pyro-ppl 1.8.4+dd4e0f81 (tested with PyTorch 1.13.0 and 2.0.0)