Fix QNSPSA Optimizer #5439
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #5439      +/-   ##
==========================================
- Coverage   99.68%   99.67%   -0.01%
==========================================
  Files         402      402
  Lines       37534    37251     -283
==========================================
- Hits        37414    37130     -284
- Misses        120      121       +1

View full report in Codecov by Sentry.
Thanks Christina! The fix works for me.
Co-authored-by: Isaac De Vlugt <34751083+isaacdevlugt@users.noreply.github.com>
Thanks!
I have a question: I see that pennylane.numpy is used here in all cases instead of vanilla numpy. Is this fine, or should we differentiate based on whether trainability is needed in each particular case?
@astralcai Something we could potentially look into. Some of the variables we are creating might not actually end up being trainable, and could just be intermediaries. But the optimizers are autograd-only, so we don't have to worry about mixing ml-framework types. I'm not as concerned about using autograd numpy here as I would be in other parts of the code base.
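To make the trainability distinction concrete, here is a minimal sketch of the idea behind autograd-style array wrappers such as pennylane.numpy: arrays carry a requires_grad flag so downstream code can tell trainable parameters from plain intermediaries. The TrainableArray class below is hypothetical, written for illustration only, and is not PennyLane's actual implementation.

```python
import numpy as np

# Hypothetical sketch: an ndarray subclass that carries a
# `requires_grad` flag, mimicking how autograd-wrapped numpy
# distinguishes trainable parameters from intermediaries.
class TrainableArray(np.ndarray):
    def __new__(cls, input_array, requires_grad=True):
        # View the input data as this subclass and attach the flag.
        obj = np.asarray(input_array, dtype=float).view(cls)
        obj.requires_grad = requires_grad
        return obj

# Trainable optimizer parameters vs. a non-trainable intermediary.
params = TrainableArray([0.1, 0.2], requires_grad=True)
scratch = TrainableArray([0.3], requires_grad=False)

print(params.requires_grad)   # True
print(scratch.requires_grad)  # False
```

In real code the optimizer would inspect this flag to decide which arrays participate in differentiation; since these optimizers are autograd-only, using the wrapped numpy type everywhere is safe even for intermediaries.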
Fixes #5437. [sc-59838]

When we started distinguishing vanilla numpy and autograd numpy in our source code, we accidentally switched to using vanilla numpy in the QNSPSA optimizer instead of autograd numpy. This switches it back.