[bug] Declaring provides_jacobian=True capability breaks device with current master #2576
Thanks @cvjjm for catching this! Actually, I wonder why our test added in #1291 didn't catch this. @chaeyeunpark, I'm guessing this was introduced in #2448?
It is indeed surprising that the test from #1291 doesn't fail. I must admit I didn't have time to check this carefully; I just noticed that some of my code no longer works with master and that commenting out this line fixes things.
I went hunting for the test; since we recently removed old unused tape subclasses, it seems that this test was moved from `pennylane/tests/gradients/test_parameter_shift.py` (lines 1032 to 1034 in 1a13bef).
Thanks @josh146. I didn't have time to look into this more, but I can report that differentiating such non-standard observables is also broken in the current master:
Both were still working with
The root cause of this turned out to be something entirely different :-) The device with which I discovered the problem also provides a device gradient, and this is what is broken now. Surprisingly, just declaring that a device provides a gradient now makes it unusable, even for evaluating, say, parameter-shift gradients of QNodes, as this minimal example shows (notice how I explicitly ask for
which results in:
Thanks @cvjjm, this is helpful, but also troubling 😬
I am very familiar with this traceback, and it always scares me since it is very hard to debug; it's an indication that a broadcasting rule we are using, one that is permitted on the forward pass, is breaking on the autograd backward pass 😬 And the traceback doesn't indicate where in the forward pass the issue is!
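To make the failure mode concrete, here is a hedged NumPy-only sketch (no autograd, all names illustrative) of how a rule can broadcast fine on the forward pass while a naive backward rule produces a shape mismatch, far from the offending forward line:

```python
import numpy as np

# Forward pass: NumPy silently broadcasts (3, 1) + (1, 4) -> (3, 4).
a = np.ones((3, 1))
b = np.ones((1, 4))
y = a + b
dy = np.ones_like(y)  # incoming cotangent on the backward pass, shape (3, 4)

# A naive backward rule passes dy through unchanged -- its shape no
# longer matches the input `a`. This only surfaces on the backward
# pass, which is why the traceback points nowhere useful.
assert dy.shape != a.shape

# The correct backward rule "unbroadcasts" by summing over the axes
# that were broadcast on the forward pass.
da = dy.sum(axis=1, keepdims=True)
assert da.shape == a.shape
```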
Luckily it is still working with a fairly recent version, so it should be possible to find the problem by bisecting. Maybe the above example can become a test to catch such problems in the future.
Hi @cvjjm, this seems to boil down to the change we chatted about around the release of v0.23.0. It seems the logic added there did start causing issues after all 😿 Specifically, a new `if` statement was added:

```python
if device.capabilities().get("provides_jacobian", False):
    # in the case where the device provides the jacobian,
    # the output of grad_fn must be wrapped in a tuple in
    # order to match the input parameters to _execute.
    return (return_vjps,)
return return_vjps
```

This was required to help with a device we have in the PennyLane-SF plugin. It seems, however, that, as you suggest, it breaks other cases. Removing the `if` statement makes the listed example execute correctly.
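As a plain-Python illustration (hypothetical toy code, not PennyLane internals) of why an unconditional extra tuple wrap around the VJPs can break other devices: the backward function is expected to return one cotangent per differentiable parameter, and wrapping changes that structure.

```python
# Toy "circuit" y = sum(p**2), so dy/dp_i = 2 * p_i. The VJP contract:
# the backward function returns one cotangent per parameter.
def make_vjp(params):
    def vjp(dy):
        return [2.0 * p * dy for p in params]
    return vjp

params = [1.0, 2.0, 3.0]
grads = make_vjp(params)(1.0)
assert grads == [2.0, 4.0, 6.0]  # one entry per parameter

# Wrapping the result in an extra tuple changes the structure: a caller
# that matches cotangents against parameters now sees a single element
# instead of three, even though the values inside are unchanged.
wrapped = (grads,)
assert len(wrapped) == 1 and len(params) == 3
```

This is why the wrap can be appropriate for one device's calling convention (as with PennyLane-SF) while breaking others.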
We could definitely make a fix for this. I'm curious: how severe is this bug in its current form? I.e., would it bring value to make a bug fix release such that it's
Oh! I guess my take-home from the above is then:
I would highly appreciate a 0.23.x bugfix release, not because I cannot wait until the 21st, but because it would be nice to have at least some v0.23.x version that I do not need to blacklist because of this :-)
Just as a reminder: the original issue from #2576 (comment) is also still open. Thanks @josh146 for hunting down the test that was meant to preserve the feature of non-standard return types! Looking at the test, it is fairly clear that the reason this was not triggered is that someone simply re-declared the observables of the test device (`pennylane/tests/gradients/test_parameter_shift.py`, lines 1072 to 1074 in 1a13bef).
This makes the test pass, but it means the device can only ever compute QNodes that return a `SpecialObservable`.
What I need, though, is a device that supports standard observables and my special observable. The test can/should probably be expanded to also have the device evaluate a
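The point about extending rather than replacing the supported observables can be sketched roughly like this (illustrative sets only, not the actual PennyLane test code; all names hypothetical):

```python
# Baseline observables a standard test device would support.
BASE_OBSERVABLES = {"PauliX", "PauliY", "PauliZ", "Hermitian"}

# What the "vandalized" test effectively does: replace the set, so only
# the special observable remains supported and standard ones break.
only_special = {"SpecialObservable"}
assert "PauliZ" not in only_special

# What is needed instead: standard observables *and* the special one.
extended = BASE_OBSERVABLES | {"SpecialObservable"}
assert "PauliZ" in extended and "SpecialObservable" in extended
```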
Hi, as far as I can tell the test is still in its "vandalized" state in the current master. Is there a chance this functionality can be restored in the next release?
Hi @cvjjm, apologies that the progress here has slowed down on the original issue; we can definitely have this in for the next release (v0.25.0). Thanks for the reminder! 👍
Hi @cvjjm, I created a PR implementing the custom-object fix, plus re-adding a test case that seems to have disappeared from PennyLane. I don't think this kind of solution would work in the case of JAX, Torch, or TF, though, since in general we register a custom primitive with each of these frameworks (representing the circuit) that must return a tensor. The only solution I can think of is some packing and unpacking mechanism, similar to JAX pytrees.
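The packing/unpacking idea could look roughly like the following (a hedged plain-Python sketch, not a proposed PennyLane API; `pack`/`unpack` are hypothetical helpers): flatten the structured result into a flat list of leaves the framework primitive can return, plus an auxiliary "treedef" used to rebuild the structure afterwards, much like JAX pytrees.

```python
# Flatten a mixed result into flat leaves + a structure description.
def pack(result):
    leaves, treedef = [], []
    for item in result:
        if isinstance(item, tuple):
            treedef.append(("tuple", len(item)))
            leaves.extend(item)
        else:
            treedef.append(("leaf", 1))
            leaves.append(item)
    return leaves, treedef

# Rebuild the original structure from the flat leaves.
def unpack(leaves, treedef):
    out, i = [], 0
    for kind, n in treedef:
        chunk = leaves[i:i + n]
        out.append(tuple(chunk) if kind == "tuple" else chunk[0])
        i += n
    return out

res = [1.0, (2.0, 3.0), 4.0]
leaves, treedef = pack(res)
assert leaves == [1.0, 2.0, 3.0, 4.0]   # tensor-friendly flat form
assert unpack(leaves, treedef) == res   # round-trips losslessly
```

The flat `leaves` list is what the registered primitive would return as a tensor, while `treedef` travels outside the framework's view.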
Thanks! That fix looks perfect, and yes, I agree that this will only work with autograd/numpy, but that is fine. There is an additional change in master that also breaks special return types, which will show up when you merge master into the fix branch. See my comment over there.
It seems that this "feature" is now broken in the latest master.
More precisely, the problem is this line:
`pennylane/pennylane/_qubit_device.py`, line 299 in 1a13bef
Would it be possible to put this line in a try/except block and return either the plain result, or a `dtype=object` array in case `result` cannot be cast to float?
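The suggested fallback could be sketched as follows (a hedged NumPy sketch; `as_result_array` and `SpecialObservableValue` are hypothetical names for illustration, not PennyLane API):

```python
import numpy as np

def as_result_array(result):
    """Try the float cast first; fall back to an object-dtype array
    for results (e.g. special observables) that are not float-castable."""
    try:
        return np.asarray(result, dtype=float)
    except (TypeError, ValueError):
        return np.asarray(result, dtype=object)

class SpecialObservableValue:
    """Stand-in for a non-numeric measurement result."""

# Numeric results keep the usual float path.
assert as_result_array([0.1, 0.2]).dtype == np.float64

# Non-castable results survive as an object array instead of raising.
assert as_result_array([SpecialObservableValue()]).dtype == object
```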