Observables returning non-standard types now possible in tape mode but wrapped in a pennylane.numpy.tensor #1109
Comments
Hey @cvjjm, if I understand your question correctly, I think the answer is no, because this would break differentiability. If you want the outputs of QNodes to be part of automatic differentiation pipelines, they need to be tensor objects recognised by the interface. For the numpy/autograd interface such differentiable objects are `pennylane.numpy.tensor` instances. In the same sense, a torch function needs to take tensors as differentiable arguments and return tensors, which is why one always has to work with torch tensors there as well.

Would that make sense? I am not sure I understood the "observable returning a class" 100%...
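For concreteness, here is a minimal sketch of what the numpy/autograd interface expects (the circuit and parameter are just illustrative): trainable arguments go in as `pennylane.numpy` tensors with `requires_grad=True`, and the QNode output comes back as a `pennylane.numpy.tensor`, so that `qml.grad` can track it.

```python
import pennylane as qml
from pennylane import numpy as np  # autograd-backed numpy shipped with PennyLane

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.3, requires_grad=True)   # differentiable tensor argument
print(type(circuit(theta)))                 # <class 'pennylane.numpy.tensor.tensor'>
print(qml.grad(circuit)(theta))             # d<Z>/dtheta = -sin(0.3) ≈ -0.2955
```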
I didn't explain the "observable returning a class" well. I have a custom observable class that is meant to be used with a custom device (I am aware that I am hacking PennyLane here and thus don't consider this a bug, by the way). That device returns an instance of a class from its `expval()` method. For QNodes "returning a single float", i.e., those ending in `return qml.expval(...)`, I would have expected to get that instance back directly instead of a tensor wrapping it.

Concerning auto differentiation: I do not necessarily expect auto differentiation to work with such a hack, but I can imagine that as long as the class defines addition and multiplication with scalars (which is the case for my class), even auto differentiation should be possible somehow...
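To make this more concrete, here is a toy sketch of the kind of result class I mean (the names are made up for illustration; my actual class is more involved):

```python
class UncertainValue:
    """Toy result object: an expectation value together with an error bar."""

    def __init__(self, value, error):
        self.value = value
        self.error = error

    # Scalar multiplication and addition, the operations mentioned above.
    def __mul__(self, scalar):
        return UncertainValue(self.value * scalar, self.error * abs(scalar))

    __rmul__ = __mul__

    def __add__(self, other):
        return UncertainValue(
            self.value + other.value,
            (self.error**2 + other.error**2) ** 0.5,
        )

    def __repr__(self):
        return f"UncertainValue({self.value} ± {self.error})"


# A custom device's expval() would then return e.g. UncertainValue(0.99, 0.01)
# instead of a plain float, and that instance ends up inside the QNode output.
```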
Ah I see, thanks for clarifying. As far as I know, at the moment wrapping is done for all outputs, even for floats:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    return qml.expval(qml.PauliX(0))

res = circuit()
print(res)        # 0.9999999999999996
print(type(res))  # <class 'pennylane.numpy.tensor.tensor'>
```

(Can you verify that? Right now I can't see why the printed string only shows the data, but I assume it is some `__repr__` behaviour.)

Of course, this makes sense for the normal way of using PennyLane. Your case is quite interesting, and indeed, something like this would not break the autodiff pipeline:

```python
from pennylane import numpy as np  # autograd-backed numpy

def process(a):
    res = circuit().numpy()
    return a * res

grad_fn = qml.grad(process)
param = np.array(5., requires_grad=True)
print(grad_fn(param))  # (array(1.),)
```

Just one last question: is extracting the underlying value like this an option for you?
Ah! I was fooled by the printing not showing the type! Indeed wrapping seems standard. Having to call `.numpy()` to unwrap the result is not a problem for me.
Indeed, I think that autodiff with autograd should work in my case with just one very minor change in PL. Do you think this can be incorporated?
I would not see why not. Would you mind opening a PR so we can see if the tests pass? Happy the output issue is less severe than anticipated!
I am also pleasantly surprised, this was not something we intended/expected when coding it 😆 Glad to know it works! We should probably add tests for this behaviour, to make sure we don't accidentally break it.
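Something along these lines, for example (just a rough sketch; the `np.tensor(...)` call only imitates the wrapping the QNode applies, and a full regression test would also exercise a device whose `expval` returns such an object):

```python
from pennylane import numpy as np


def test_tensor_wraps_arbitrary_object():
    """A non-numeric expval result should survive wrapping and be recoverable."""

    class Payload:  # stand-in for a custom observable's result class
        pass

    payload = Payload()
    wrapped = np.tensor(payload)       # roughly what the QNode does to the result
    assert wrapped.item() is payload   # .item() gives back the original instance
```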
I will test this some more and open a PR once I finally get official permission to contribute to open source projects under Apache 2.0.
Thanks @cvjjm! Looking forward to the PR, let us know if we can help with anything.
The PR is here: #1291
Thanks @cvjjm!
I have observed (with great delight!) that tape mode now allows observables returning non-standard types beyond floats, e.g., dicts or classes.

However, I also noticed that QNodes returning the expval of an observable that returns a class actually return a single-element `pennylane.numpy.tensor` that contains the instance of the class returned by the observable. The result can then be accessed by calling `.item()` on the returned tensor, but I think it would be nicer if the QNode returned the result of the observable directly, rather than wrapping it. Is that possible?
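For illustration, roughly what I am seeing (the result class is made up, and the `np.tensor(...)` call here only imitates the wrapping that happens inside PennyLane in my setup):

```python
from pennylane import numpy as np


class MyResult:  # stand-in for the class my custom observable returns
    def __init__(self, data):
        self.data = data


# What the QNode currently hands back: the instance wrapped in a 0-d tensor.
res = np.tensor(MyResult({"energy": -1.2}))
print(type(res))         # <class 'pennylane.numpy.tensor.tensor'>
print(type(res.item()))  # <class '__main__.MyResult'>, the unwrapped instance
```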