Repeated applications of the torch interface result in an error when using templates #1210
Comments
@mariaschuld, adding you since this is related to the template-to-ops refactor. That said, I wonder whether this is a problem more generally for operations that are expanded because they are not supported on the device 🤔
Once this is fixed, it would be good to revert the changes of PennyLaneAI/qml#247.
Interesting... I checked three differences between templates (which are now operations) and "original" operations, but none of them seems to make a difference. It must be a very strange edge case.
Removing the QNode, to make the non-working example even more minimal:

```python
import torch
import pennylane as qml
from pennylane.interfaces.torch import TorchInterface

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)
weights = torch.ones((3,))

with TorchInterface.apply(qml.tape.QuantumTape()) as tape:
    qml.U3(*weights, wires=0)
    qml.expval(qml.PauliZ(wires=0))

tape = tape.expand()
res = tape.execute(dev)
print(res)

TorchInterface.apply(tape)  # this line errors with the same error message as above
res = tape.execute(dev)
print(res)
```
I solved the issue in #1223 - it turns out this wasn't related to the templates refactor; the refactor just caused this edge case to surface 🙂 Applying the torch interface twice would error with any expanded/decomposed operation.
Consider the following code:
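A minimal sketch of the kind of circuit described (the device, wire count, and weight shape are illustrative assumptions, and `qnode.to_torch()` is the tape-mode QNode method mentioned under "Additional information" below):

```python
import torch
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qnode(weights):
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

# BasicEntanglerLayers expects weights of shape (n_layers, n_wires)
weights = torch.ones((3, n_qubits))

qnode(weights)    # converts the contained quantum tape to the Torch interface
qnode.to_torch()  # converting again raises the error
```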
Here, we are creating a QNode that contains `BasicEntanglerLayers`, an operation that was previously a tape (changed in #1138). We then evaluate the QNode using `qnode(weights)`, which causes the contained quantum tape to be converted to the Torch interface. Finally, we convert to the Torch interface again in the last line, which raises an error. If we instead use a different circuit without templates, the error does not appear.
Also, the circuit with `BasicEntanglerLayers` does not cause an error before #1138 was merged. Hence, the migration from templates to operations has introduced this issue.

Could this be to do with the use of `expand()` by the templates? The `expand()` method returns a quantum tape; do we have to be careful about the interface? A quick check is sketched below.
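One way to probe that question, reusing the construction from the minimal example above (whether the expanded tape keeps the interface class is exactly what is in doubt, so no particular output is asserted here):

```python
import torch
import pennylane as qml
from pennylane.interfaces.torch import TorchInterface

# Build a Torch-interface tape containing an operation that needs expansion.
with TorchInterface.apply(qml.tape.QuantumTape()) as tape:
    qml.U3(*torch.ones((3,)), wires=0)
    qml.expval(qml.PauliZ(wires=0))

expanded = tape.expand()

# The interface is applied by modifying the tape's class, so comparing the
# classes of the two tapes shows whether expand() preserved the interface.
print(type(tape))
print(type(expanded))
```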
Additional information

This issue was identified due to failing tests for the TorchLayer tutorial. That tutorial creates multiple `TorchLayer`s from the same QNode, resulting in `to_torch()` being called multiple times. We don't typically expect `to_torch()` to be called explicitly by users; a sketch of the multiple-layer pattern follows.
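The pattern from that tutorial looks roughly like this (the QNode body here is illustrative rather than the tutorial's exact circuit; `TorchLayer` requires the QNode to take an `inputs` argument):

```python
import torch
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits)}

# Creating two layers from the same QNode calls to_torch() on it twice,
# which is what surfaced this bug.
layer1 = qml.qnn.TorchLayer(qnode, weight_shapes)
layer2 = qml.qnn.TorchLayer(qnode, weight_shapes)
```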