Commit

Merge branch 'master' into sparse_observable
josh146 committed Jun 28, 2021
2 parents bfbeaaf + 614db57 · commit c97865d
Showing 2 changed files with 18 additions and 1 deletion.
3 changes: 3 additions & 0 deletions .github/CHANGELOG.md
@@ -11,6 +11,9 @@

<h3>Bug fixes</h3>

* Fixed a bug in the `torch` interface that prevented gradients from being
  computed on a GPU. [(#1426)](https://github.com/PennyLaneAI/pennylane/pull/1426)

* Quantum function transforms now preserve the format of the measurement
  results, so that a single measurement returns a single value rather than
  an array with a single element. [(#1434)](https://github.com/PennyLaneAI/pennylane/pull/1434/files)
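
For context, the failure mode that [#1426](https://github.com/PennyLaneAI/pennylane/pull/1426) fixes looks roughly like the following minimal sketch. The circuit and parameter value are illustrative, and a CUDA-capable PyTorch build is assumed:

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def circuit(phi):
    qml.RX(phi, wires=0)
    return qml.expval(qml.PauliZ(0))

# The parameter lives on the GPU; before this fix, the backward pass could
# fail with a CPU/GPU device mismatch when forming the vector-Jacobian product.
phi = torch.tensor(0.5, device="cuda", requires_grad=True)
loss = circuit(phi)
loss.backward()
print(phi.grad)
```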
16 changes: 15 additions & 1 deletion pennylane/interfaces/torch.py
@@ -184,7 +184,21 @@ def backward(ctx_, ddy):  # pragma: no cover
 def backward(ctx, dy):  # pragma: no cover
     """Implements the backwards pass QNode vector-Jacobian product"""
     ctx.dy = dy
-    vjp = dy.view(1, -1) @ ctx.jacobian.apply(ctx, *ctx.saved_tensors)
+
+    dyv = dy.view(1, -1)
+    jac_res = ctx.jacobian.apply(ctx, *ctx.saved_tensors)
+
+    # When using CUDA, dyv seems to remain on the GPU, while the result
+    # of jac_res is returned on CPU, even though the saved_tensors arguments are
+    # themselves on the GPU. Check whether this has happened, and move things
+    # back to the GPU if required.
+    if dyv.is_cuda or jac_res.is_cuda:
+        if not dyv.is_cuda:
+            dyv = torch.as_tensor(dyv, device=jac_res.get_device())
+        if not jac_res.is_cuda:
+            jac_res = torch.as_tensor(jac_res, device=dyv.get_device())
+
+    vjp = dyv @ jac_res
     vjp = torch.unbind(vjp.view(-1))
     return (None,) + tuple(vjp)
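
In isolation, the device-reconciliation logic added above behaves like the following sketch. The tensor shapes and values are made up for illustration, and a CUDA device is assumed to be available:

```python
import torch

# Stand-ins for the tensors in the backward pass: the flattened gradient
# stays on the GPU, while the Jacobian comes back on the CPU.
dyv = torch.ones(1, 3, device="cuda")  # plays the role of dy.view(1, -1)
jac_res = torch.eye(3)                 # plays the role of the Jacobian result

# `dyv @ jac_res` at this point would raise a device-mismatch RuntimeError,
# so move whichever operand is on the CPU over to the GPU first. Note that
# get_device() is only ever called on the operand known to be on a CUDA device.
if dyv.is_cuda or jac_res.is_cuda:
    if not dyv.is_cuda:
        dyv = torch.as_tensor(dyv, device=jac_res.get_device())
    if not jac_res.is_cuda:
        jac_res = torch.as_tensor(jac_res, device=dyv.get_device())

vjp = dyv @ jac_res  # both operands now live on the same CUDA device
print(vjp.device)    # cuda:0
```

Moving the CPU-resident operand onto the GPU (rather than pulling everything back to the CPU) keeps the vector-Jacobian product on the same device as the incoming gradients, so the rest of the autograd graph stays on the GPU.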

