
added grad_fn kwarg to QNGOptimizer.step_and_cost and .step #1487

Merged: 18 commits into PennyLaneAI:master from QNG-step-kwargs on Aug 3, 2021

Conversation

dwierichs
Contributor

The step_and_cost and step methods of QNGOptimizer currently do not accept the grad_fn argument, unlike the other optimizers.
In particular, this prevents straightforward use of JAX with QNG.

Changes
The kwarg grad_fn is now accepted as an explicit kwarg by the two methods above, which pass it on to compute_grad.

Benefits
Unifies the optimizer interfaces and enables custom gradient functions when using QNG.

Possible drawbacks
None. (The order of the existing kwargs was not changed; grad_fn was added as the last kwarg, so there is no breaking change even when the existing kwargs are passed as positional args.)
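As a sketch of the interface pattern this PR unifies, the stand-in optimizer below accepts an optional ``grad_fn`` in ``step_and_cost`` and falls back to a numerical gradient when none is supplied. This is a simplified illustration of the ``compute_grad`` dispatch, not PennyLane's actual implementation; the class and helper names are hypothetical.

```python
# Minimal stand-in showing the grad_fn hook: a user-supplied gradient
# function (e.g. one built with jax.grad) overrides the default
# differentiation. NOT PennyLane's implementation -- names are illustrative.
import numpy as np

def numeric_grad(fn, x, eps=1e-6):
    """Central finite-difference gradient, used as the default."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        shift = np.zeros_like(x, dtype=float)
        shift[i] = eps
        g[i] = (fn(x + shift) - fn(x - shift)) / (2 * eps)
    return g

class SketchOptimizer:
    def __init__(self, stepsize=0.1):
        self.stepsize = stepsize

    def compute_grad(self, objective_fn, x, grad_fn=None):
        # grad_fn, when given, takes precedence over the default gradient.
        if grad_fn is not None:
            return grad_fn(x)
        return numeric_grad(objective_fn, x)

    def step_and_cost(self, objective_fn, x, grad_fn=None):
        g = self.compute_grad(objective_fn, x, grad_fn=grad_fn)
        return x - self.stepsize * g, objective_fn(x)

cost = lambda x: np.sum(x**2)
opt = SketchOptimizer(stepsize=0.5)
x0 = np.array([1.0, -2.0])

# Default numerical gradient vs. a user-supplied analytic grad_fn:
x1, c1 = opt.step_and_cost(cost, x0)
x2, c2 = opt.step_and_cost(cost, x0, grad_fn=lambda x: 2 * x)
```

Because ``grad_fn`` was appended as the last keyword argument, code that passes the existing kwargs positionally keeps working unchanged.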

@codecov

codecov bot commented Jul 30, 2021

Codecov Report

Merging #1487 (17de12c) into master (24cc6c2) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #1487   +/-   ##
=======================================
  Coverage   98.32%   98.32%           
=======================================
  Files         180      180           
  Lines       12705    12709    +4     
=======================================
+ Hits        12492    12496    +4     
  Misses        213      213           
Impacted Files                             Coverage Δ
pennylane/optimize/gradient_descent.py     100.00% <ø> (ø)
pennylane/optimize/qng.py                  100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@josh146 josh146 added the enhancement ✨ New feature or request label Jul 31, 2021
@josh146 josh146 self-requested a review July 31, 2021 16:03
@josh146 (Member) left a comment


Thanks for this @dwierichs! Code itself is looking 💯, my main comment is to split up the test into distinct tests (one for grouped input, separate input, custom fn, etc) 🙂

.github/CHANGELOG.md (outdated, resolved)
pennylane/optimize/qng.py (outdated, resolved)
recompute_tensor (bool): Whether or not the metric tensor should
be recomputed. If not, the metric tensor from the previous
optimization step is used.
metric_tensor_fn (function): Optional metric tensor function
with respect to the variables ``x``.
If ``None``, the metric tensor function is computed automatically.
**kwargs : variable length of keyword arguments for the qnode
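To illustrate how a QNG-style step would consume the gradient returned by a custom ``grad_fn``, the sketch below preconditions the gradient with the pseudo-inverse of a metric tensor. This is a simplified stand-in for the update rule, not the code in ``pennylane/optimize/qng.py``; the function name and the diagonal metric are hypothetical.

```python
# Simplified QNG-style update: the gradient (e.g. from a user-supplied
# grad_fn) is preconditioned by the pseudo-inverse of the metric tensor.
# Illustrative stand-in only, not PennyLane's qng.py.
import numpy as np

def qng_step(x, grad, metric_tensor, stepsize=0.1):
    """One natural-gradient step: x - eta * pinv(g) @ grad."""
    return x - stepsize * np.linalg.pinv(metric_tensor) @ grad

x = np.array([0.4, 0.8])
grad = 2 * x                     # e.g. produced by a custom grad_fn
metric = np.diag([0.25, 0.25])   # toy block-diagonal metric tensor
x_new = qng_step(x, grad, metric, stepsize=0.05)
```

The ``metric_tensor_fn`` kwarg documented above plays the role of supplying ``metric_tensor`` here, just as ``grad_fn`` supplies ``grad``.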
josh146 (Member):

Good catch. Was this needed because you were using keyword arguments in your notebook?

dwierichs (Contributor, Author):

No, this is to make QNGOptimizer more similar to GradientDescentOptimizer :)

pennylane/optimize/qng.py (resolved)
tests/test_optimize_qng.py (outdated, resolved)
@josh146 (Member) left a comment


💯 Changes look good @dwierichs, happy for this to be merged in!

@josh146 josh146 merged commit 73baaed into PennyLaneAI:master Aug 3, 2021
@dwierichs dwierichs deleted the QNG-step-kwargs branch August 3, 2021 11:53