
Fix QNSPSA Optimizer #5439

Merged
merged 5 commits into master from qnspsa-fix on Apr 1, 2024

Conversation

@albi3ro (Contributor) commented Mar 26, 2024

Fixes #5437. [sc-59838]

When we started distinguishing vanilla numpy and autograd numpy in our source code, we accidentally switched to using vanilla numpy in the QNSPSA optimizer instead of autograd numpy. This switches it back.
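To illustrate the failure mode (using a hypothetical `TrainableTensor` stand-in, not PennyLane's actual tensor class): vanilla numpy routines such as `np.asarray` return a base `ndarray`, silently dropping the subclass that carries the trainability flag, which is why parameters came out of the optimizer as plain NumPy arrays.

```python
import numpy as np

class TrainableTensor(np.ndarray):
    """Hypothetical stand-in for pennylane.numpy's tensor, which
    subclasses np.ndarray and carries a requires_grad flag."""

    def __new__(cls, input_array, requires_grad=True):
        obj = np.asarray(input_array).view(cls)
        obj.requires_grad = requires_grad
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.requires_grad = getattr(obj, "requires_grad", True)

params = TrainableTensor([0.1, 0.2], requires_grad=True)

# Vanilla numpy coercion (subok=False by default) returns a plain
# ndarray: the subclass, and with it the requires_grad flag, is
# silently dropped.
coerced = np.asarray(params)

print(type(params).__name__)   # TrainableTensor
print(type(coerced).__name__)  # ndarray
```

The real fix is simply importing autograd's numpy (via `pennylane.numpy`) again inside the optimizer, so that no such coercion happens.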


codecov bot commented Mar 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.67%. Comparing base (1bb10be) to head (efed07b).
Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5439      +/-   ##
==========================================
- Coverage   99.68%   99.67%   -0.01%     
==========================================
  Files         402      402              
  Lines       37534    37251     -283     
==========================================
- Hits        37414    37130     -284     
- Misses        120      121       +1     


@isaacdevlugt (Contributor) left a comment

Thanks Christina! The fix works for me.

Review thread on doc/releases/changelog-dev.md (outdated, resolved)
Co-authored-by: Isaac De Vlugt <34751083+isaacdevlugt@users.noreply.github.com>
@astralcai astralcai self-requested a review April 1, 2024 13:19
@PietropaoloFrisoni (Contributor) left a comment

Thanks!

@astralcai (Contributor) left a comment

I have a question: I see that here pennylane.numpy is used in all cases instead of vanilla numpy. Is this fine or should we differentiate based on whether trainability is needed for each particular case?

@albi3ro (Contributor, Author) commented Apr 1, 2024

> I have a question: I see that here pennylane.numpy is used in all cases instead of vanilla numpy. Is this fine or should we differentiate based on whether trainability is needed for each particular case?

@astralcai Something we could potentially look into. Some of the variables we are creating might not actually end up being trainable, and could just be intermediaries. But the optimizers are autograd-only, so we don't have to worry about mixing ml-framework types. I'm not as concerned about using autograd numpy here as I would be in other parts of the code base.
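A rough sketch of why using the trainable type everywhere is low-risk (again with a hypothetical `ndarray` subclass rather than PennyLane's real tensor): NumPy ufuncs and arithmetic propagate the subclass to their outputs via `__array_finalize__`, so intermediaries built from trainable inputs keep the trainable type anyway, at worst costing a little extra bookkeeping.

```python
import numpy as np

class TrainableTensor(np.ndarray):
    """Hypothetical stand-in for an autograd-aware tensor subclass."""

    def __new__(cls, input_array, requires_grad=True):
        obj = np.asarray(input_array).view(cls)
        obj.requires_grad = requires_grad
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.requires_grad = getattr(obj, "requires_grad", True)

x = TrainableTensor([0.1, 0.2])

# ufuncs and arithmetic propagate the subclass, so intermediate
# values created from trainable inputs stay "trainable" even if
# they are never actually differentiated.
mid = np.sin(x) * 2.0 + 1.0

print(type(mid).__name__)  # TrainableTensor
```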

@albi3ro albi3ro enabled auto-merge (squash) April 1, 2024 14:08
@albi3ro albi3ro merged commit 8327ffa into master Apr 1, 2024
40 checks passed
@albi3ro albi3ro deleted the qnspsa-fix branch April 1, 2024 15:00
Successfully merging this pull request may close these issues.

[BUG] QNSPSAOptimizer with Autograd changes differentiable parameters type to vanilla NumPy array