[unitaryHACK] Create a Pytorch simulator #1225 #1360

Merged
210 commits merged on Aug 27, 2021
Commits (210):
bcdd7bf
add default torch
Slimane33 May 6, 2021
a3bff30
install plugin
Slimane33 May 6, 2021
9d0f8aa
basic circuits work
Slimane33 May 10, 2021
dd762b8
sampling for torch simulator
Slimane33 May 10, 2021
de5575d
convert array to torch tensor
Slimane33 May 10, 2021
b0ed9d0
enable backprop on expvalues
Slimane33 May 16, 2021
f2275d1
rewrite sample_basis_state
Slimane33 May 18, 2021
1e23c2d
cleaning
Slimane33 May 18, 2021
12e455a
add docstring operations
Slimane33 May 18, 2021
0e802ba
add controlphaseshift + multiRZ
Slimane33 May 18, 2021
3f952b1
add new operations from tf_ops
Slimane33 May 19, 2021
7e1eb51
stop check import
Slimane33 May 20, 2021
b7f68cb
update
Slimane33 May 23, 2021
7bff1f9
solve RZ gate
Slimane33 May 23, 2021
d03f75f
fixing all operations
Slimane33 May 23, 2021
3ab9ca0
check version compatibility
Slimane33 May 23, 2021
e81fba1
fix double exc gates
Slimane33 May 24, 2021
ed72557
main docstring
Slimane33 May 24, 2021
62584e2
docstring + cleaning
Slimane33 May 24, 2021
cdeeede
draft unit test
Slimane33 May 24, 2021
5768a3f
remove torch test
Slimane33 May 24, 2021
1f60f1c
Merge branch 'master' into pytorch-device
josh146 May 24, 2021
be6f3d5
Device's tests added (test_default_qubit_torch.py)
May 24, 2021
5c4b79d
Merge branch 'pytorch-device' of https://github.com/Slimane33/pennyla…
May 24, 2021
7fc758b
Merge branch 'pytorch-device' of https://github.com/Slimane33/pennyla…
PCesteban May 24, 2021
e0d7fdf
Merge branch 'pytorch-device' of https://github.com/Slimane33/pennyla…
PCesteban May 24, 2021
8b2e78d
correction
PCesteban May 24, 2021
5cb82a7
Some checks corrections
PCesteban May 24, 2021
65c5205
fix tests + autograd
Slimane33 May 24, 2021
6dea4fa
code factor
Slimane33 May 24, 2021
9174bcc
code factor
Slimane33 May 24, 2021
00db293
code factor
Slimane33 May 24, 2021
3950a54
Torch device test added (To check)
PCesteban May 24, 2021
59303c9
Merge branch 'master' into pytorch-device
josh146 May 25, 2021
cf6df7e
fix _asarray
Slimane33 May 25, 2021
b84c771
coverage report
Slimane33 May 25, 2021
5c33df8
Merge branch 'master' into pytorch-device
josh146 May 25, 2021
dea37a8
added torch device in passthru_devices
PCesteban May 25, 2021
542d95f
update test + passthru
Slimane33 May 25, 2021
cc182f3
enable inverse operation
Slimane33 May 25, 2021
376d448
correct inverse operation
Slimane33 May 25, 2021
aa1432d
update
Slimane33 May 26, 2021
ff53099
passing 147 test-cases
arshpreetsingh May 26, 2021
3d11745
Merge pull request #1 from arshpreetsingh/pytorch-device-unit-test
PCesteban May 26, 2021
f4877f5
suppress tf tests
Slimane33 May 26, 2021
8eff218
changed semantic version
charmerDark May 26, 2021
b237ea6
fix bug
Slimane33 May 26, 2021
4fb5eeb
fix bug
Slimane33 May 26, 2021
19b912c
Version in line 279
PCesteban May 26, 2021
246f9c0
testing for cude device
arshpreetsingh May 26, 2021
92289be
Update with suggested changes
PCesteban May 27, 2021
5eac43e
default_qubit_torch added to the documentation list
PCesteban May 27, 2021
8bef3cb
Merge branch 'pytorch-device' of https://github.com/Slimane33/pennylane
arshpreetsingh May 27, 2021
f042659
removing torch from autograd
arshpreetsingh May 27, 2021
c0bf8b1
Merge branch 'master' into pytorch-device
josh146 May 27, 2021
b5fbf55
removing cuda test
arshpreetsingh May 27, 2021
55868be
removing cuda test support
arshpreetsingh May 27, 2021
e27b9a1
Merge pull request #2 from arshpreetsingh/pytorch-device-unit-test
PCesteban May 27, 2021
62b5141
Update
PCesteban May 27, 2021
e3c3986
Update
PCesteban May 27, 2021
a9f8b80
rewrite _tensordot
Slimane33 May 27, 2021
d25545a
Merge branch 'master' into pytorch-device
Slimane33 May 27, 2021
e84e82b
solving PR reviews
arshpreetsingh May 27, 2021
780a7a0
code reformatting after: black -l 100 pennylane tests
arshpreetsingh May 27, 2021
aa57e0b
rmoved whitespaces
arshpreetsingh May 27, 2021
d6af59a
Merge branch 'pytorch-device' into pytorch-device-unit-test
Slimane33 May 27, 2021
19e498e
Merge pull request #3 from arshpreetsingh/pytorch-device-unit-test
Slimane33 May 27, 2021
ac75362
doc/requirements
Slimane33 May 27, 2021
972b1fd
doc/requirements
Slimane33 May 27, 2021
4f64eeb
conflict solved
PCesteban May 27, 2021
21daf78
suggested change in docstring
PCesteban May 27, 2021
f1a127a
Suggested change in docstring fixed
PCesteban May 27, 2021
d23f5fc
fix docstring default_qubit_torch
Slimane33 May 27, 2021
ecfc209
Merge branch 'master' into pytorch-device
mariaschuld May 27, 2021
9314aba
Update tests.yml
josh146 May 27, 2021
69985ef
Merge branch 'master' into pytorch-device
PCesteban May 27, 2021
c71d91f
Merge branch 'master' into pytorch-device
Slimane33 May 28, 2021
d39628f
fix cuda
Slimane33 May 28, 2021
73b51e5
fix cuda
Slimane33 May 28, 2021
4302b93
Merge branch 'master' into pytorch-device
albi3ro May 28, 2021
22a4f51
fix docs and CI
josh146 May 29, 2021
b51ae0f
fix docs and CI
josh146 May 29, 2021
cfd4dfd
fix cuda
Slimane33 May 29, 2021
9940a47
fix cuda in default.qubit instead of _qubit_device
Slimane33 May 29, 2021
124245e
fixed doc issue
arshpreetsingh May 30, 2021
fe6f214
pushed chages after black
arshpreetsingh May 30, 2021
393228c
fix version to 1.8.0 in torch.py
PCesteban May 31, 2021
e75188e
unremove test
josh146 May 31, 2021
76c23b0
Merge branch 'pytorch-device' of github.com:Slimane33/pennylane into …
josh146 May 31, 2021
b045f8e
black
josh146 May 31, 2021
9097567
suggested changes
PCesteban May 31, 2021
bff6b63
fix versions
PCesteban May 31, 2021
97ea50a
rewrite _apply_ops in torch + fix test
Slimane33 Jun 1, 2021
c5b83e3
delete comment
Slimane33 Jun 1, 2021
93d9c88
Merge branch 'master' into pytorch-device
Slimane33 Jun 10, 2021
03663cd
Merge branch 'master' into pytorch-device
PCesteban Jun 13, 2021
e2c0773
adding default.qubit.torch to conftest.py
PCesteban Jun 15, 2021
de49d0b
Merge branch 'master' into pytorch-device
PCesteban Jun 15, 2021
4a4094d
Merge branch 'master' into pytorch-device
PCesteban Jun 22, 2021
84bb566
Merge branch 'master' into pytorch-device
PCesteban Jun 27, 2021
fe65a27
Merge branch 'master' into pytorch-device
PCesteban Jul 1, 2021
8b442b8
bump pytorch version
josh146 Jul 2, 2021
cf16922
running black
josh146 Jul 2, 2021
256aa04
Merge branch 'master' into pytorch-device
co9olguy Jul 5, 2021
b4e07e9
Merge branch 'master' into pytorch-device
antalszava Jul 8, 2021
0db490e
Merge branch 'master' into pytorch-device
arshpreetsingh Jul 11, 2021
946b77f
fix _apply_state_vector + test_tape_torch
Slimane33 Jul 13, 2021
72d02a5
fix test_qnode_torch
Slimane33 Jul 13, 2021
26421d4
Merge branch 'master' into pytorch-device
antalszava Jul 19, 2021
f5406df
Merge branch 'master' into pytorch-device
antalszava Jul 20, 2021
819c8a9
added Ising operations
arshpreetsingh Jul 21, 2021
647a3e4
updated doc strings
arshpreetsingh Jul 21, 2021
87922dd
running black
arshpreetsingh Jul 21, 2021
684d0d1
Merge branch 'master' into pytorch-device
arshpreetsingh Jul 21, 2021
83fee40
Merge branch 'master' into pytorch-device
antalszava Jul 22, 2021
5168967
Update .github/workflows/tests.yml
antalszava Jul 23, 2021
edbc0e9
Merge branch 'master' into pytorch-device
arshpreetsingh Jul 23, 2021
be57772
Update default_qubit_torch.py
arshpreetsingh Jul 23, 2021
db12c2b
running black
arshpreetsingh Jul 23, 2021
577902d
Merge branch 'master' into pytorch-device
PCesteban Jul 23, 2021
14393b7
Merge branch 'master' into pytorch-device
antalszava Jul 26, 2021
70b30f1
Merge branch 'master' into pytorch-device
antalszava Jul 27, 2021
bfece02
Update tests/devices/test_default_qubit_torch.py
antalszava Aug 10, 2021
32cfc75
Update tests/devices/test_default_qubit_torch.py
PCesteban Aug 10, 2021
2f62245
Conflicts and code factor notes
PCesteban Aug 12, 2021
662b0e6
Delete qubit.py
PCesteban Aug 12, 2021
eab9f9d
Merge branch 'master' into pytorch-device
PCesteban Aug 12, 2021
ad74b74
Update tests/devices/test_default_qubit_torch.py
PCesteban Aug 12, 2021
28c0457
Update tests/devices/test_default_qubit_torch.py
antalszava Aug 13, 2021
c9b1ce6
Merge branch 'master' into pytorch-device
albi3ro Aug 13, 2021
5e33e1a
Part 1 of reformatting
albi3ro Aug 13, 2021
8082ea1
reverting formatting changes pt 2
albi3ro Aug 13, 2021
c626508
reverting formatting changes pt 3
albi3ro Aug 13, 2021
51edc0c
revert formatting changes pt 4
albi3ro Aug 13, 2021
c7b0851
revert formatting changes pt last
albi3ro Aug 13, 2021
a1793b9
fixing mistake
albi3ro Aug 13, 2021
7cc9b6b
Apply suggestions from code review
albi3ro Aug 13, 2021
984d4bd
Update tests/devices/test_default_qubit_torch.py
albi3ro Aug 13, 2021
d2432b3
remove some white space
albi3ro Aug 13, 2021
97319f0
update docstring example
albi3ro Aug 13, 2021
eb3cacd
Update tests/devices/test_default_qubit_torch.py
albi3ro Aug 13, 2021
a7f7f90
format
antalszava Aug 13, 2021
b0c8a23
autograd.py from master
antalszava Aug 13, 2021
53d5de5
autograd.py fixes
antalszava Aug 13, 2021
45a6f2c
Merge branch 'master' into pytorch-device
antalszava Aug 16, 2021
9b30c3b
Merge branch 'master' into pytorch-device
PCesteban Aug 16, 2021
0d0cbf5
Merge branch 'master' into pytorch-device
Slimane33 Aug 17, 2021
33c1fe0
Merge branch 'master' into pytorch-device
antalszava Aug 17, 2021
0058e6f
apply_state_vector fix
albi3ro Aug 17, 2021
63f7853
Merge branch 'master' into pytorch-device
albi3ro Aug 17, 2021
4fa4cb5
black
albi3ro Aug 17, 2021
a5f7f7e
Merge branch 'master' into pytorch-device
antalszava Aug 17, 2021
0e9fba5
improve tests, fix diagonal gate application
albi3ro Aug 19, 2021
9110268
Update pennylane/devices/default_qubit_torch.py
PCesteban Aug 21, 2021
1185952
Update pennylane/devices/default_qubit_torch.py
PCesteban Aug 21, 2021
3320119
finish polishing tests
albi3ro Aug 23, 2021
b5d973d
Merge branch 'master' into pytorch-device
albi3ro Aug 23, 2021
710fa6b
formatting, remove print statements
albi3ro Aug 23, 2021
bd5aa4c
Merge branch 'pytorch-device' of https://github.com/Slimane33/pennyla…
albi3ro Aug 23, 2021
ecf5ada
add diagonal inverse test
albi3ro Aug 23, 2021
f4c854e
Merge branch 'master' into pytorch-device
PCesteban Aug 23, 2021
62a1f16
changelog, black
albi3ro Aug 24, 2021
1bef7c3
Merge branch 'pytorch-device' of https://github.com/Slimane33/pennyla…
albi3ro Aug 24, 2021
a734ad2
style change
albi3ro Aug 24, 2021
c616e64
resolve
antalszava Aug 24, 2021
2f1a154
Update pennylane/devices/default_qubit_torch.py
antalszava Aug 24, 2021
7c7172e
Update pennylane/devices/default_qubit_torch.py
antalszava Aug 24, 2021
bbc08cf
error if sampling for shots=None
antalszava Aug 24, 2021
9c032de
Merge branch 'pytorch-device' of github.com:Slimane33/pennylane into …
antalszava Aug 24, 2021
9b69fc6
test error if sampling for shots=None
antalszava Aug 24, 2021
545cb08
torch test in device test properties test case
antalszava Aug 24, 2021
a67c595
Ising ops support and adjust
antalszava Aug 24, 2021
3e49056
Ising ops tests
antalszava Aug 24, 2021
f638f3e
remove commented tests
antalszava Aug 24, 2021
d7ed317
error import
antalszava Aug 24, 2021
1c05c94
format
antalszava Aug 24, 2021
eb2a49e
Update pennylane/devices/default_qubit_torch.py
antalszava Aug 24, 2021
84e2757
Update pennylane/devices/default_qubit_torch.py
antalszava Aug 24, 2021
77a57e9
Update pennylane/devices/default_qubit_torch.py
antalszava Aug 24, 2021
970516e
Update pennylane/devices/torch_ops.py
antalszava Aug 24, 2021
53f92e8
Update pennylane/devices/torch_ops.py
antalszava Aug 24, 2021
78f5c6d
Update tests/devices/test_default_qubit_torch.py
antalszava Aug 24, 2021
01a507f
Update tests/devices/test_default_qubit_torch.py
antalszava Aug 24, 2021
7b44a6b
Update tests/devices/test_default_qubit_torch.py
antalszava Aug 24, 2021
0bc0669
Update pennylane/devices/torch_ops.py
antalszava Aug 24, 2021
7e4fd9b
Update pennylane/devices/default_qubit_torch.py
antalszava Aug 24, 2021
e40c155
no warnings import
antalszava Aug 24, 2021
997fcfc
Merge branch 'pytorch-device' of github.com:Slimane33/pennylane into …
antalszava Aug 24, 2021
4aa7a73
format
antalszava Aug 24, 2021
74710dd
Merge branch 'master' into pytorch-device
antalszava Aug 24, 2021
915bb63
minor fixes
albi3ro Aug 24, 2021
aeb4d5d
Merge branch 'master' into pytorch-device
PCesteban Aug 24, 2021
81c8de6
docstring, sampling super, and test qchem ops
albi3ro Aug 25, 2021
716a7de
Merge branch 'master' into pytorch-device
albi3ro Aug 25, 2021
c014d63
remove unused import
albi3ro Aug 25, 2021
9681077
Merge branch 'pytorch-device' of https://github.com/Slimane33/pennyla…
albi3ro Aug 25, 2021
17a5793
Merge branch 'master' into pytorch-device
albi3ro Aug 25, 2021
81d31b9
Merge branch 'master' into pytorch-device
albi3ro Aug 26, 2021
ae70fd5
black
albi3ro Aug 26, 2021
32f2ae7
test double excitation gates
albi3ro Aug 26, 2021
4fe6381
black
albi3ro Aug 26, 2021
d5f662e
revert sampling change
albi3ro Aug 26, 2021
f2cd2b2
qml.quantumfunctionerror
albi3ro Aug 26, 2021
e4e0d3e
please!
albi3ro Aug 26, 2021
d4e7bb3
final black
albi3ro Aug 26, 2021
9ce7501
I swear i just did black
albi3ro Aug 26, 2021
ba3885a
Merge branch 'master' into pytorch-device
josh146 Aug 27, 2021
03a8bfb
Update pennylane/devices/default_qubit.py
josh146 Aug 27, 2021
7252743
fix
josh146 Aug 27, 2021
ad9a8b5
Merge branch 'master' into pytorch-device
josh146 Aug 27, 2021
12 changes: 8 additions & 4 deletions .github/CHANGELOG.md
@@ -2,6 +2,11 @@

<h3>New features since last release</h3>


* A new PyTorch device, `qml.device('default.qubit.torch', wires=wires)`, supports
  backpropagation with the torch interface.
[(#1225)](https://github.com/PennyLaneAI/pennylane/pull/1360)

* The ability to define *batch* transforms has been added via the new
`@qml.batch_transform` decorator.
[(#1493)](https://github.com/PennyLaneAI/pennylane/pull/1493)
@@ -364,10 +369,9 @@ and requirements-ci.txt (unpinned). This latter would be used by the CI.

This release contains contributions from (in alphabetical order):


-Vishnu Ajith, Akash Narayanan B, Thomas Bromley, Tanya Garg, Josh Izaac, Prateek Jain, Johannes Jakob Meyer, Pratul Saini, Maria Schuld,
-Ingrid Strandberg, David Wierichs, Vincent Wong.

+Vishnu Ajith, Akash Narayanan B, Thomas Bromley, Tanya Garg, Josh Izaac, Prateek Jain, Christina Lee,
+Johannes Jakob Meyer, Esteban Payares, Pratul Saini, Maria Schuld, Arshpreet Singh, Ingrid Strandberg,
+Slimane Thabet, David Wierichs, Vincent Wong.

# Release 0.17.0 (current release)

2 changes: 2 additions & 0 deletions pennylane/devices/__init__.py
@@ -24,11 +24,13 @@

default_qubit
default_qubit_jax
default_qubit_torch
default_qubit_tf
default_qubit_autograd
default_gaussian
default_mixed
tf_ops
torch_ops
autograd_ops
tests
"""
5 changes: 3 additions & 2 deletions pennylane/devices/default_qubit.py
@@ -502,7 +502,7 @@ def expval(self, observable, shot_range=None, bin_size=None):
                if isinstance(coeff, qml.numpy.tensor) and not coeff.requires_grad:
                    coeff = qml.math.toarray(coeff)

-                res = res + (
+                res = qml.math.convert_like(res, product) + (
                    qml.math.cast(qml.math.convert_like(coeff, product), "complex128") * product
                )
                return qml.math.real(res)
@@ -536,6 +536,7 @@ def capabilities(cls):
            returns_state=True,
            passthru_devices={
                "tf": "default.qubit.tf",
+                "torch": "default.qubit.torch",
                "autograd": "default.qubit.autograd",
                "jax": "default.qubit.jax",
            },
@@ -609,7 +610,7 @@ def _apply_state_vector(self, state, device_wires):
        if state.ndim != 1 or n_state_vector != 2 ** len(device_wires):
            raise ValueError("State vector must be of length 2**wires.")

-        if not np.allclose(np.linalg.norm(state, ord=2), 1.0, atol=tolerance):
+        if not qml.math.allclose(qml.math.linalg.norm(state, ord=2), 1.0, atol=tolerance):
            raise ValueError("Sum of amplitudes-squared does not equal one.")

        if len(device_wires) == self.num_wires and sorted(device_wires) == device_wires:
301 changes: 301 additions & 0 deletions pennylane/devices/default_qubit_torch.py
@@ -0,0 +1,301 @@
# Copyright 2018-2021 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module contains a PyTorch implementation of the :class:`~.DefaultQubit`
reference plugin.
"""
import semantic_version

try:
    import torch

    VERSION_SUPPORT = semantic_version.match(">=1.8.1", torch.__version__)
    if not VERSION_SUPPORT:
        raise ImportError("default.qubit.torch device requires Torch>=1.8.1")

except ImportError as e:
    raise ImportError("default.qubit.torch device requires Torch>=1.8.1") from e

import numpy as np
from pennylane.operation import DiagonalOperation
from pennylane.devices import torch_ops
from . import DefaultQubit


class DefaultQubitTorch(DefaultQubit):
    """Simulator plugin based on ``"default.qubit"``, written using PyTorch.

    **Short name:** ``default.qubit.torch``

    This device provides a pure-state qubit simulator written using PyTorch.
    As a result, it supports classical backpropagation as a means to compute the Jacobian. This can
    be faster than the parameter-shift rule for analytic quantum gradients
    when the number of parameters to be optimized is large.

    To use this device, you will need to install PyTorch:

    .. code-block:: console

        pip install "torch>=1.8.1"

    **Example**

    The ``default.qubit.torch`` device is designed to be used with end-to-end classical
    backpropagation (``diff_method="backprop"``) and the PyTorch interface. This is the default
    method of differentiation when creating a QNode with this device.

    Using this method, the created QNode is a 'white-box', and is
    tightly integrated with your PyTorch computation:

    .. code-block:: python

        dev = qml.device("default.qubit.torch", wires=1)

        @qml.qnode(dev, interface="torch", diff_method="backprop")
        def circuit(x):
            qml.RX(x[1], wires=0)
            qml.Rot(x[0], x[1], x[2], wires=0)
            return qml.expval(qml.PauliZ(0))

    >>> weights = torch.tensor([0.2, 0.5, 0.1], requires_grad=True)
    >>> res = circuit(weights)
    >>> res.backward()
    >>> print(weights.grad)
    tensor([-2.2527e-01, -1.0086e+00,  1.3878e-17])

    PyTorch's autograd will also differentiate through classical post-processing
    of the circuit output:

    >>> def cost(weights):
    ...     return torch.sum(circuit(weights)**3) - 1
    >>> res = cost(weights)
    >>> res.backward()
    >>> print(weights.grad)
    tensor([-4.5053e-01, -2.0173e+00,  5.9837e-17])

    Executing the pipeline in PyTorch allows the whole computation to be run on the GPU,
    thereby accelerating it. Your parameters need to be instantiated on the same torch
    device as the backend device.

    .. code-block:: python

        dev = qml.device("default.qubit.torch", wires=1, torch_device='cuda')

        @qml.qnode(dev, interface="torch", diff_method="backprop")
        def circuit(x):
            qml.RX(x[1], wires=0)
            qml.Rot(x[0], x[1], x[2], wires=0)
            return qml.expval(qml.PauliZ(0))

    >>> weights = torch.tensor([0.2, 0.5, 0.1], requires_grad=True, device='cuda')
    >>> res = circuit(weights)
    >>> res.backward()
    >>> print(weights.grad)
    tensor([-2.2527e-01, -1.0086e+00,  1.3878e-17])

    There are a couple of things to keep in mind when using the ``"backprop"``
    differentiation method for QNodes:

    * You must use the ``"torch"`` interface for classical backpropagation, as PyTorch is
      used as the device backend.

    * Only exact expectation values, variances, and probabilities are differentiable.
      When instantiating the device with ``shots!=None``, differentiating QNode
      outputs will result in ``None``.

    If you wish to use a different machine-learning interface, or prefer to calculate quantum
    gradients using the ``parameter-shift`` or ``finite-diff`` differentiation methods,
    consider using the ``default.qubit`` device instead.

    Args:
        wires (int, Iterable): Number of subsystems represented by the device,
            or iterable that contains unique labels for the subsystems. Default 1 if not specified.
        shots (None, int): How many times the circuit should be evaluated (or sampled) to estimate
            the expectation values. Defaults to ``None`` if not specified, which means
            that the device returns analytical results.
            If ``shots > 0`` is used, the ``diff_method="backprop"``
            QNode differentiation method is not supported and it is recommended to consider
            switching device to ``default.qubit`` and using ``diff_method="parameter-shift"``.
        torch_device='cpu' (str): the device on which the computation will be run, ``'cpu'`` or ``'cuda'``
    """

    name = "Default qubit (Torch) PennyLane plugin"
    short_name = "default.qubit.torch"

    parametric_ops = {
        "PhaseShift": torch_ops.PhaseShift,
        "ControlledPhaseShift": torch_ops.ControlledPhaseShift,
        "RX": torch_ops.RX,
        "RY": torch_ops.RY,
        "RZ": torch_ops.RZ,
        "MultiRZ": torch_ops.MultiRZ,
        "Rot": torch_ops.Rot,
        "CRX": torch_ops.CRX,
        "CRY": torch_ops.CRY,
        "CRZ": torch_ops.CRZ,
        "CRot": torch_ops.CRot,
        "IsingXX": torch_ops.IsingXX,
        "IsingYY": torch_ops.IsingYY,
        "IsingZZ": torch_ops.IsingZZ,
        "SingleExcitation": torch_ops.SingleExcitation,
        "SingleExcitationPlus": torch_ops.SingleExcitationPlus,
        "SingleExcitationMinus": torch_ops.SingleExcitationMinus,
        "DoubleExcitation": torch_ops.DoubleExcitation,
        "DoubleExcitationPlus": torch_ops.DoubleExcitationPlus,
        "DoubleExcitationMinus": torch_ops.DoubleExcitationMinus,
    }

    C_DTYPE = torch.complex128
    R_DTYPE = torch.float64

    _abs = staticmethod(torch.abs)
    _einsum = staticmethod(torch.einsum)
    _flatten = staticmethod(torch.flatten)
    _reshape = staticmethod(torch.reshape)
    _roll = staticmethod(torch.roll)
    _stack = staticmethod(lambda arrs, axis=0, out=None: torch.stack(arrs, axis=axis, out=out))
    _tensordot = staticmethod(
        lambda a, b, axes: torch.tensordot(
            a, b, axes if isinstance(axes, int) else tuple(map(list, axes))
        )
    )
    _transpose = staticmethod(lambda a, axes=None: a.permute(*axes))
    _asnumpy = staticmethod(lambda x: x.cpu().numpy())
    _conj = staticmethod(torch.conj)
    _imag = staticmethod(torch.imag)
    _norm = staticmethod(torch.norm)

    def __init__(self, wires, *, shots=None, analytic=None, torch_device="cpu"):
        self._torch_device = torch_device
        super().__init__(wires, shots=shots, cache=0, analytic=analytic)

        # Move state to torch device (e.g. CPU, GPU, XLA, ...)
        self._state.requires_grad = True
        self._state = self._state.to(self._torch_device)
        self._pre_rotated_state = self._state

    @staticmethod
    def _asarray(a, dtype=None):
        if isinstance(a, list):
            # Handle unexpected cases where we don't have a list of tensors
            if not isinstance(a[0], torch.Tensor):
                res = np.asarray(a)
                res = torch.from_numpy(res)
            else:
                res = torch.cat([torch.reshape(i, (-1,)) for i in a], dim=0)
                res = torch.cat([torch.reshape(i, (-1,)) for i in res], dim=0)
Comment on lines +193 to +197 (Contributor):
Hi @PCesteban, just wanted to make sure that the suggested updates in

aligned well with the intentions for this method. We've noticed that previously the device might output numpy arrays, instead of torch tensors (hence the suggestions from before). Let us know if we've missed a use case here.

Reply (Contributor):
Hi @antalszava, I don't think there is any missed use case to the best of my knowledge. torch tensors are a better fit for this method.

FYI
@Slimane33 @arshpreetsingh

        else:
            res = torch.as_tensor(a, dtype=dtype)
        return res
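The list-handling branch commented on above converts a plain list to one tensor, and flattens and concatenates a list of tensors. As a rough illustration of that behaviour, here is a NumPy sketch with a hypothetical `asarray_sketch` helper (not the device's actual code, which operates on torch tensors):

```python
import numpy as np

def asarray_sketch(a):
    # Mirrors the device's _asarray branching, using NumPy arrays
    # in place of torch tensors (illustration only).
    if isinstance(a, list):
        if not isinstance(a[0], np.ndarray):
            # A plain list of numbers becomes a single array.
            return np.asarray(a)
        # A list of arrays is flattened and concatenated into one 1-D array.
        return np.concatenate([np.reshape(x, (-1,)) for x in a])
    return np.asarray(a)
```

This is why downstream code can rely on receiving a single array-like object regardless of the input's nesting.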

    @staticmethod
    def _dot(x, y):
        if x.device != y.device:
            if x.device != "cpu":
                return torch.tensordot(x, y.to(x.device), dims=1)
            if y.device != "cpu":
                return torch.tensordot(x.to(y.device), y, dims=1)
Comment (Contributor):
This is a GPU device fix, so we can't test it without having a GPU test.


        return torch.tensordot(x, y, dims=1)

    def _cast(self, a, dtype=None):
        return torch.as_tensor(self._asarray(a, dtype=dtype), device=self._torch_device)

    @staticmethod
    def _reduce_sum(array, axes):
        if not axes:
            return array
        return torch.sum(array, dim=axes)

    @staticmethod
    def _conj(array):
        if isinstance(array, torch.Tensor):
            return torch.conj(array)
        return np.conj(array)

    @staticmethod
    def _scatter(indices, array, new_dimensions):

        # `array` is now a torch tensor
        tensor = array
        new_tensor = torch.zeros(new_dimensions, dtype=tensor.dtype, device=tensor.device)
        new_tensor[indices] = tensor
        return new_tensor
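`_scatter` builds a zero tensor of the target shape and writes the source values into the given index positions. The same operation can be sketched in NumPy (illustrative only, with a hypothetical `scatter_sketch` helper; the device works on torch tensors):

```python
import numpy as np

def scatter_sketch(indices, array, new_dimensions):
    # Allocate the target shape filled with zeros, then write the
    # source values at the requested index positions.
    new_array = np.zeros(new_dimensions, dtype=array.dtype)
    new_array[indices] = array
    return new_array
```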

    @classmethod
    def capabilities(cls):
        capabilities = super().capabilities().copy()
        capabilities.update(passthru_interface="torch", supports_reversible_diff=False)
Comment (Contributor):
So the capabilities method does get tested, so I don't know what's up with code cov here.

Reply from glassnotes (Contributor, Aug 18, 2021):
@albi3ro we discovered yesterday that any instance of pass in the code gets ignored by codecov due to a line in the .coveragerc file. (Even in strings and arguments!) #1555 should fix this 🎉

        return capabilities

    def _get_unitary_matrix(self, unitary):
        """Return the matrix representing a unitary operation.

        Args:
            unitary (~.Operation): a PennyLane unitary operation

        Returns:
            torch.Tensor[complex]: Returns a 2D matrix representation of
            the unitary in the computational basis, or, in the case of a diagonal unitary,
            a 1D array representing the matrix diagonal.
        """
        op_name = unitary.base_name
        if op_name in self.parametric_ops:
            if op_name == "MultiRZ":
                mat = self.parametric_ops[op_name](
                    *unitary.parameters, len(unitary.wires), device=self._torch_device
                )
            else:
                mat = self.parametric_ops[op_name](*unitary.parameters, device=self._torch_device)
            if unitary.inverse:
                if isinstance(unitary, DiagonalOperation):
                    mat = self._conj(mat)
                else:
                    mat = self._transpose(self._conj(mat), axes=[1, 0])
            return mat

        if isinstance(unitary, DiagonalOperation):
            return self._asarray(unitary.eigvals, dtype=self.C_DTYPE)
        return self._asarray(unitary.matrix, dtype=self.C_DTYPE)
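The inverse branch above relies on unitarity: for a diagonal gate the inverse is the elementwise complex conjugate of the eigenvalues, while for a full matrix it is the conjugate transpose. A minimal NumPy check of that identity (a sketch with a hypothetical `invert_unitary` helper, not the device code):

```python
import numpy as np

def invert_unitary(mat):
    # A 1-D input represents the diagonal of a diagonal unitary:
    # its inverse is the elementwise complex conjugate.
    if mat.ndim == 1:
        return np.conj(mat)
    # For a general unitary matrix, the inverse is the conjugate transpose.
    return np.conj(mat).T

# S gate as a diagonal, and Hadamard as a full matrix.
s_diag = np.array([1, 1j])
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
```

Multiplying each gate by its computed inverse recovers the identity, which is what the device's inverse handling depends on.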

    def sample_basis_states(self, number_of_states, state_probability):
        """Sample from the computational basis states based on the state
        probability.

        This is an auxiliary method to the ``generate_samples`` method.

        Args:
            number_of_states (int): the number of basis states to sample from
            state_probability (torch.Tensor[float]): the computational basis probability vector

        Returns:
            List[int]: the sampled basis states
        """
        return super().sample_basis_states(
            number_of_states, state_probability.cpu().detach().numpy()
        )
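Detaching the probability vector to NumPy before sampling is safe because sampling is not differentiable anyway. The underlying step amounts to drawing basis-state indices weighted by the probability vector, which can be sketched as (illustration only, not the device code):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Probability vector over the 4 basis states of 2 qubits:
# only |00> and |10> (indices 0 and 2) have nonzero probability.
state_probability = np.array([0.5, 0.0, 0.5, 0.0])

# Draw basis-state indices according to those probabilities.
samples = rng.choice(len(state_probability), size=100, p=state_probability)
```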

    def _apply_operation(self, state, operation):
        """Applies operations to the input state.

        Args:
            state (torch.Tensor[complex]): input state
            operation (~.Operation): operation to apply on the device

        Returns:
            torch.Tensor[complex]: output state
        """
        if state.device != self._torch_device:
            state = state.to(self._torch_device)
        return super()._apply_operation(state, operation)
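After the device-placement check, the parent class applies the gate by contracting its matrix against the state tensor. The core contraction can be sketched in NumPy as follows (a simplified single-qubit version with a hypothetical `apply_gate` helper; the real implementation also handles multi-qubit gates, diagonal gates, and inverses):

```python
import numpy as np

def apply_gate(state, mat, wire, num_wires):
    # View the flat state vector as an n-qubit tensor of shape (2, ..., 2).
    state = state.reshape([2] * num_wires)
    # Contract the gate's column index with the target wire's axis.
    state = np.tensordot(mat, state, axes=[[1], [wire]])
    # tensordot moves the contracted axis to the front; restore the ordering.
    state = np.moveaxis(state, 0, wire)
    return state.reshape(-1)

# Applying Pauli-X to wire 0 of |00> should give |10>.
pauli_x = np.array([[0, 1], [1, 0]])
ket_00 = np.array([1, 0, 0, 0])
```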
7 changes: 6 additions & 1 deletion pennylane/devices/tests/conftest.py
@@ -33,7 +33,12 @@
# Number of shots to call the devices with
N_SHOTS = 1e6
# List of all devices that are included in PennyLane
-LIST_CORE_DEVICES = {"default.qubit", "default.qubit.tf", "default.qubit.autograd"}
+LIST_CORE_DEVICES = {
+    "default.qubit",
+    "default.qubit.torch",
+    "default.qubit.tf",
+    "default.qubit.autograd",
+}


@pytest.fixture(scope="function")