Observables returning non-standard types now possible in tape mode but wrapped in a pennylane.numpy.tensor #1109

Closed
cvjjm opened this issue Feb 24, 2021 · 11 comments

cvjjm (Contributor) commented Feb 24, 2021

I have observed (with great delight!) that tape mode now allows observables returning non-standard types beyond floats, e.g., dicts or classes.

However, I also noticed that a QNode returning the expval of such a class-returning observable actually returns a single-element pennylane.numpy.tensor containing the instance returned by the observable. The result can then be accessed by calling .item() on the returned tensor, but I think it would be nicer if the QNode returned the result of the observable directly, rather than wrapping it.

Is that possible?
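
To illustrate with a minimal sketch (the Result class here is just a hypothetical stand-in for what my observable actually returns):

from pennylane import numpy as pnp

class Result:
    """Hypothetical stand-in for a non-standard observable result."""
    def __init__(self, value):
        self.value = value

# A zero-dimensional object-dtype tensor, mimicking what the QNode
# currently hands back for a class-valued observable.
wrapped = pnp.tensor(Result(0.5), dtype=object)

print(type(wrapped))         # <class 'pennylane.numpy.tensor.tensor'>
print(type(wrapped.item()))  # <class '__main__.Result'>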

github-actions bot added the bug 🐛 label Feb 24, 2021
mariaschuld (Contributor) commented Feb 24, 2021

Hey @cvjjm, if I understand your question correctly, I think the answer is no, because this would break differentiability. If you want the outputs of QNodes to be part of automatic differentiation pipelines, they need to be tensor objects recognised by the interface. For the numpy/autograd interface, such differentiable objects are pennylane.numpy.tensors...

In the same sense, a torch function needs to take tensors as differentiable arguments and return tensors, which is why one always has to use detach().numpy() - that is annoying, but crucial!
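
For instance, here is a small PyTorch sketch of that point (nothing PennyLane-specific, just illustrating why detaching is required before leaving the autodiff world):

import torch

x = torch.tensor(0.3, requires_grad=True)
y = torch.sin(x)

# y is part of the computational graph, so gradients flow...
y.backward()
print(x.grad)              # tensor(0.9553), i.e. cos(0.3)

# ...but converting to plain NumPy requires detaching it first.
print(y.detach().numpy())  # 0.29552022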

Would that make sense? I am not sure I understood the "observable returning a class" 100%...

cvjjm (Contributor, Author) commented Feb 24, 2021

I didn't explain the "observable returning a class" well. I have a custom observable class that is meant to be used with a custom device (I am aware that I am hacking PennyLane here and thus don't consider this a bug, by the way). That device returns an instance of a class from its .expval() function. In effect, a QNode ending in return qml.expval(MyObservable), when executed on the custom device, "should" return that instance. Instead it returns a one-element pennylane.numpy.tensor with my instance as its only .item().

For QNodes "returning a single float", i.e., those ending in return qml.expval(qml.PauliZ(wires=[0])), I also get a float back and not a pennylane.numpy.tensor with a single float element, and automatic differentiation also works in that case.

Concerning automatic differentiation: I do not necessarily expect it to work with such a hack, but I can imagine that as long as the class defines addition and multiplication with scalars (which is the case for my class), even automatic differentiation should be possible somehow...
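
To sketch what I mean (MyResult is a hypothetical simplification of my actual class):

class MyResult:
    """Hypothetical result type supporting the linear operations that
    gradient recipes such as the parameter-shift rule rely on."""
    def __init__(self, data):
        self.data = data

    def __add__(self, other):
        other_data = other.data if isinstance(other, MyResult) else other
        return MyResult(self.data + other_data)

    __radd__ = __add__

    def __mul__(self, scalar):
        return MyResult(self.data * scalar)

    __rmul__ = __mul__

# Gradient recipes only ever combine results linearly, e.g.
# 0.5 * (f(x + s) - f(x - s)) for the two-term parameter-shift rule:
grad = 0.5 * (MyResult(0.75) + (-1.0) * MyResult(0.25))
print(grad.data)  # 0.25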

mariaschuld (Contributor) commented Feb 24, 2021

Ah I see, thanks for clarifying.

As far as I know, at the moment wrapping is done for all outputs, even for floats:

import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    return qml.expval(qml.PauliX(0))

res = circuit()
print(res)        # 0.9999999999999996
print(type(res))  # <class 'pennylane.numpy.tensor.tensor'>

(Can you verify that? Right now I can't see why the printed string only shows the data, but I assume it is some __repr__ or __str__ method up the hierarchy that does that...)

Of course, this makes sense for the normal way of using PennyLane. Your case is quite interesting, and indeed, something like this would not break the autodiff pipeline:

from pennylane import numpy as np

def process(a):
    # .numpy() unwraps the QNode output into a plain NumPy value, so the
    # result enters the computation as a constant.
    res = circuit().numpy()
    return a * res

grad_fn = qml.grad(process)

param = np.array(5., requires_grad=True)
print(grad_fn(param))  # (array(1.),)

Just one last question: is extracting the .item() just a cosmetic issue, or does it stop you from doing something? I.e., could you code up a decorator for a QNode that does the unpacking automatically?
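
A minimal sketch of such a decorator could look like this (untested, just to illustrate the idea):

import functools

import pennylane as qml

def unwrap(qnode):
    """Hypothetical decorator unpacking a zero-dimensional tensor
    returned by a QNode into the bare object it wraps."""
    @functools.wraps(qnode)
    def wrapper(*args, **kwargs):
        res = qnode(*args, **kwargs)
        # .item() extracts the lone element of a 0-d tensor; outputs
        # with more structure are passed through unchanged.
        return res.item() if getattr(res, "ndim", None) == 0 else res
    return wrapper

dev = qml.device("default.qubit", wires=1)

@unwrap
@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    return qml.expval(qml.PauliX(0))

print(type(circuit()))  # a plain float instead of a tensor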

cvjjm (Contributor, Author) commented Feb 24, 2021

Ah! I was fooled by the printing not showing the type! Indeed, wrapping seems to be standard.

Having to use .item() is just a cosmetic issue. But if wrapping is standard, then I think there is no need to change anything, because then it is consistent, and consistency is all I wanted :-)

cvjjm (Contributor, Author) commented Feb 24, 2021

Indeed, I think that autodiff with autograd should work in my case with just one very minor change in PennyLane:

diff --git a/pennylane/tape/tapes/jacobian_tape.py b/pennylane/tape/tapes/jacobian_tape.py
index 364cc969..0f8a79ce 100644
--- a/pennylane/tape/tapes/jacobian_tape.py
+++ b/pennylane/tape/tapes/jacobian_tape.py
@@ -541,7 +541,7 @@ class JacobianTape(QuantumTape):
                 # update the tape's output dimension
                 self._output_dim = len(g)
                 # create the Jacobian matrix
-                jac = np.zeros((len(g), len(params)), dtype=float)
+                jac = np.zeros((len(g), len(params)), dtype=g.dtype)

             jac[:, i] = g

Do you think this can be incorporated?
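
The reason for the change: with dtype=float, NumPy tries to cast my class instances to float when assigning the gradient column into the Jacobian buffer, which fails; inferring the dtype from g keeps it as an object array. A small sketch (MyResult is again a hypothetical stand-in):

import numpy as np

class MyResult:
    def __init__(self, data):
        self.data = data

# A gradient column containing class instances instead of floats:
g = np.array([MyResult(0.1), MyResult(0.2)], dtype=object)

jac = np.zeros((len(g), 1), dtype=g.dtype)  # object dtype, as in the patch
jac[:, 0] = g                               # works

# With the current hard-coded dtype, the same assignment fails:
# jac_float = np.zeros((len(g), 1), dtype=float)
# jac_float[:, 0] = g  # raises TypeError: float() argument must be ...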

mariaschuld (Contributor) commented Feb 24, 2021

I would not see why not; internally we just need to make sure that the processing_fn functions produced by the differentiation rules are extra careful to return floats, so that the current behaviour is preserved - but this is something we should have control over.

Would you mind opening a PR so we can see if the tests pass?

Happy that the output issue is less severe than anticipated!

antalszava removed the bug 🐛 label Feb 24, 2021
josh146 (Member) commented Feb 25, 2021

> I have observed (with great delight!) that tape mode now allows observables returning non-standard types beyond floats, e.g., dicts or classes.

I am also pleasantly surprised; this was not something we intended or expected when coding it 😆 Glad to know it works!

We should probably add tests for this behaviour, to make sure we don't accidentally break it.
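
Something along these lines, at least for the float path (testing the object-returning case would need a toy device, so this is only a sketch):

import pennylane as qml
from pennylane import numpy as np

def test_float_expval_still_differentiable():
    """Sketch: the dtype change must not break ordinary float expvals."""
    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev)
    def circuit(x):
        qml.RX(x, wires=0)
        return qml.expval(qml.PauliZ(0))

    x = np.array(0.4, requires_grad=True)
    # d/dx <Z> = d/dx cos(x) = -sin(x)
    assert np.allclose(qml.grad(circuit)(x), -np.sin(0.4))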

cvjjm (Contributor, Author) commented Feb 25, 2021

I will test this some more and open a PR once I finally get official permission to contribute to open source projects under Apache 2.0.

trbromley (Contributor) commented

Thanks @cvjjm! Looking forward to the PR; let us know if we can help with anything.

cvjjm (Contributor, Author) commented May 17, 2021

The PR is here: #1291

cvjjm closed this as completed May 17, 2021
josh146 (Member) commented May 18, 2021

Thanks @cvjjm!
