[BugFix] Error in documentation's numpy differentiation code example #1499
Conversation
Hello. You may have forgotten to update the changelog!
@@ -163,20 +163,22 @@ and two non-trainable arguments ``data`` and ``wires``:
     qml.CNOT(wires=[wires[0], wires[2]])
     return qml.expval(qml.PauliZ(wires[0]))

-We must specify that ``data`` and ``wires`` are NumPy arrays with ``requires_grad=False``:
+Since ``wires`` is a keyword argument, it will be automatically non-trainable. However, we
@josh146 will this also be true after we give up the backwards compatibility hack?
Nope! In fact, this isn't even a PL thing, but an autograd thing - it is autograd that is ignoring keyword arguments 😆 So maybe best to reword this?
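For context, a minimal sketch of the behavior described here, using an illustrative device and gate of my own choosing (not from the PR): ``qml.grad`` is built on autograd, and autograd only differentiates positional arguments, so anything passed by keyword is left untouched.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def rotation(x, shift=0.0):
    # ``x`` is positional (and trainable); ``shift`` is a keyword argument
    qml.RX(x + shift, wires=0)
    return qml.expval(qml.PauliZ(0))

x = np.array(0.3, requires_grad=True)

# autograd differentiates only the positional ``x``; the keyword ``shift``
# is ignored, so this returns d<Z>/dx = -sin(x + shift)
print(qml.grad(rotation)(x, shift=0.1))
```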
Codecov Report
@@           Coverage Diff           @@
##           master    #1499   +/-   ##
=======================================
  Coverage   98.36%   98.36%
=======================================
  Files         182      182
  Lines       12933    12933
=======================================
  Hits        12722    12722
  Misses        211      211

Continue to review full report at Codecov.
Thanks for getting to this so quickly @mariaschuld! I think we also need to solve the underlying issue (the backwards compatibility 'everything in a positional argument is trainable') ASAP
>>> circuit(weights, data, wires)
0.16935626052294817
>>> wires = [2, 0, 1]
>>> circuit(weights, data, wires=wires)
As far as I am aware, this is the only code change necessary to make it work; the change above (``wires=None``) should not be needed.
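For readers following along, here is a self-contained sketch of how the corrected example plausibly fits together. Only the ``CNOT`` line, the ``PauliZ`` return, and the keyword-argument call come from the diff; the embedding, layer template, weight shapes, and seed are assumptions of mine:

```python
import pennylane as qml
from pennylane import numpy as np

np.random.seed(42)  # assumed seed; the PR adds one for reproducibility

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(weights, data, wires=None):
    qml.templates.AngleEmbedding(data, wires=wires)               # assumed
    qml.templates.StronglyEntanglingLayers(weights, wires=wires)  # assumed
    qml.CNOT(wires=[wires[0], wires[2]])                          # from the diff
    return qml.expval(qml.PauliZ(wires[0]))                       # from the diff

weights = np.random.random(size=(1, 3, 3), requires_grad=True)
data = np.random.random(size=(3,), requires_grad=False)
wires = [0, 1, 2]

# passing ``wires`` by keyword keeps autograd from treating it as trainable
print(circuit(weights, data, wires=wires))
```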
When we compute the derivative, arguments with ``requires_grad=False``, as well as keyword arguments,
are ignored by :func:`~.grad`:
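A minimal continuation of the sketch above showing this sentence in action: with ``data`` marked ``requires_grad=False`` and ``wires`` passed by keyword, only ``weights`` contributes to the gradient.

```python
# reuses circuit, weights, data, and wires from the sketch above
grad_fn = qml.grad(circuit)
gradient = grad_fn(weights, data, wires=wires)
print(gradient.shape)  # matches weights.shape, i.e. (1, 3, 3)
```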
👍
I think this is all that needs to be said?
Co-authored-by: Josh Izaac <josh146@gmail.com>
@@ -122,12 +122,12 @@ Differentiable and non-differentiable arguments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

How does PennyLane know which arguments of a quantum function to differentiate, and which to ignore?
-For example, you may want to pass arguments as positional arguments to a QNode but *not* have
+For example, you may want to pass arguments to a QNode but *not* have
I decided to polish the entire paragraph, so it makes sense together. I think ``as positional arguments`` is confusing here because we only state the rule about positional arguments below.
PennyLane consider them when computing gradients.

As a basic rule, **all positional arguments provided to the QNode are assumed to be differentiable
-by default**. To accomplish this, all arrays created by the PennyLane NumPy module have a special
-flag ``requires_grad`` specifying whether they are trainable or not:
+by default**. To accomplish this, arguments are internally turned into arrays of the PennyLane NumPy module,
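As a quick illustration of the ``requires_grad`` flag the old wording described (values arbitrary):

```python
from pennylane import numpy as np

# arrays created via the PennyLane NumPy module carry a trainability flag
trainable = np.array([0.1, 0.2], requires_grad=True)   # the default
fixed = np.array([0.3, 0.4], requires_grad=False)

print(trainable.requires_grad)  # True
print(fixed.requires_grad)      # False
```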
I think this piece of information is super important!
@josh146 I updated the entire section to be clearer. We will only need to make one or two text adjustments when the upcoming change happens, but the examples should work either way. Do you think this is better?
Thanks @mariaschuld, reads 💯 from my end!
Context:
Updates a failing code example.

Description of the Change:
Use a keyword argument to mark the non-differentiable wires argument.
Add a random seed to make the example reproducible.
Updated example reads:
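(From the diff hunks above, the updated example presumably centers on the keyword-argument call:)

```pycon
>>> wires = [2, 0, 1]
>>> circuit(weights, data, wires=wires)
```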
Related GitHub Issues:
#1498