Add support for the parameter-shift hessian to the Torch interface #1129
@@ -2,6 +2,39 @@
<h3>New features since last release</h3>
* Computing second derivatives and Hessians of QNodes is now supported when
  using the PyTorch interface.
  [(#1129)](https://github.com/PennyLaneAI/pennylane/pull/1129/files)

  Hessians are computed using the parameter-shift rule, and can be
  evaluated on both hardware and simulator devices.

  ```python
  import torch
  import pennylane as qml
  from torch.autograd.functional import jacobian, hessian

  dev = qml.device('default.qubit', wires=1)

  @qml.qnode(dev, interface='torch', diff_method="parameter-shift")
  def circuit(p):
      qml.RY(p[0], wires=0)
      qml.RX(p[1], wires=0)
      return qml.probs(wires=0)

  x = torch.tensor([1.0, 2.0], requires_grad=True)
  ```
  ```python
  >>> circuit(x)
  tensor([0.3876, 0.6124], dtype=torch.float64, grad_fn=<SqueezeBackward0>)
  >>> jacobian(circuit, x)
  tensor([[ 0.1751, -0.2456],
          [-0.1751,  0.2456]], grad_fn=<ViewBackward>)
  >>> hessian(circuit, x)
  tensor([[[ 0.1124,  0.3826],
           [ 0.3826,  0.1124]],
          [[-0.1124, -0.3826],
           [-0.3826, -0.1124]]])
  ```

  Review comments on this example:

  > This is really cool!

  > Yep! With a disclaimer that I'm not entirely sure how PyTorch is ordering
  > the dimensions here.

  > The output above agrees with TF and Autograd, but we should double-check this.
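As background on how such Hessians can be obtained: applying the two-term parameter-shift rule twice (with shift π/2) yields a three-term formula for a second derivative, f''(x) = [f(x + π) − 2 f(x) + f(x − π)] / 4. The sketch below is a minimal NumPy illustration of that rule, using the analytic expectation f(x) = cos(x) of a single rotation as a stand-in for a device-evaluated QNode; it is not PennyLane's internal implementation.

```python
import numpy as np

def f(x):
    # Analytic expectation <Z> after RX(x) applied to |0>; in this sketch
    # it stands in for a QNode evaluated on hardware or a simulator.
    return np.cos(x)

def second_derivative_param_shift(f, x):
    # Two-term shift rule with s = pi/2, applied twice:
    #   f'(x)  = [f(x + s) - f(x - s)] / 2
    #   f''(x) = [f(x + 2s) - 2 f(x) + f(x - 2s)] / 4
    s = np.pi / 2
    return (f(x + 2 * s) - 2 * f(x) + f(x - 2 * s)) / 4

x = 0.7
print(second_derivative_param_shift(f, x))  # matches the analytic -cos(0.7)
print(-np.cos(x))
```

Note that each second derivative costs three circuit evaluations here, which is why the Hessian remains hardware-compatible: only shifted forward evaluations are needed, never backpropagation through the device.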
* Adds a new optimizer, `qml.ShotAdaptiveOptimizer`, a gradient-descent optimizer where
  the shot rate is adaptively calculated using the variances of the parameter-shift gradient.
  [(#1139)](https://github.com/PennyLaneAI/pennylane/pull/1139)
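As a rough illustration of the shot-adaptive idea (a hypothetical, simplified sketch, not the actual `qml.ShotAdaptiveOptimizer` implementation): estimate each gradient from a batch of single-shot samples, then grow or shrink the next step's shot budget based on the ratio of the sample variance to the squared gradient signal. All names and the noise model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad_sample(x, sigma=0.5):
    # Single-"shot" estimate of the gradient of f(x) = x^2 (true value 2x),
    # with Gaussian noise as a hypothetical stand-in for device shot noise.
    return 2 * x + rng.normal(0.0, sigma)

def shot_adaptive_descent(x0, lr=0.1, steps=30, min_shots=2, max_shots=100):
    x, shots = x0, min_shots
    for _ in range(steps):
        samples = np.array([noisy_grad_sample(x) for _ in range(shots)])
        g, var = samples.mean(), samples.var(ddof=1)
        x -= lr * g
        # Heuristic: request more shots when variance dominates the signal,
        # fewer when the gradient estimate is already well resolved.
        shots = int(np.ceil(var / max(g**2, 1e-9)))
        shots = min(max(shots, min_shots), max_shots)
    return x

print(shot_adaptive_descent(2.0))  # converges toward the minimum at 0
```

The design intuition is that early in optimization, when gradients are large, cheap noisy estimates suffice; near convergence, the signal-to-noise ratio drops and the shot budget is increased automatically.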
> So for my understanding: we'd also like to add this for the other interfaces,
> but Torch is the first one we're doing (maybe because it's easier)?
> So this is a continuation of #961?

> We've got #1131 (autograd) and #1110 (tf) also open :) We just decided to split
> it into three PRs to help with code review. But feel free to review the others
> as well if interested!