Commit
Merge branch 'master' into ch7853-fix_grad_desc_mult_args
antalszava authored Aug 6, 2021
2 parents 675b7fc + c0cdff2 commit ad0eb38
Showing 11 changed files with 1,902 additions and 38 deletions.
13 changes: 11 additions & 2 deletions .github/CHANGELOG.md
@@ -75,10 +75,14 @@
* A new gradients module `qml.gradients` has been added, which provides
differentiable quantum gradient transforms.
[(#1476)](https://github.com/PennyLaneAI/pennylane/pull/1476)
[(#1479)](https://github.com/PennyLaneAI/pennylane/pull/1479)
[(#1486)](https://github.com/PennyLaneAI/pennylane/pull/1486)

Available quantum gradient transforms include:

- `qml.gradients.finite_diff`
- `qml.gradients.param_shift`
- `qml.gradients.param_shift_cv`

For example,
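a minimal sketch of the tape-based workflow is shown below (the single-qubit circuit and parameter values are illustrative assumptions, not the changelog's own example):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)
params = np.array([0.3, 0.4], requires_grad=True)

# build a tape whose gradient we want to compute
with qml.tape.JacobianTape() as tape:
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    qml.expval(qml.PauliZ(0))

# the transform returns parameter-shifted tapes plus a post-processing function
gradient_tapes, fn = qml.gradients.param_shift(tape)

# execute the shifted tapes, then combine the results into the gradient
results = dev.batch_execute(gradient_tapes)
print(fn(results))
```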

@@ -467,11 +471,16 @@
parameters raised an error.
[(#1495)](https://github.com/PennyLaneAI/pennylane/pull/1495)

* Fixes a bug where the adjoint of `qml.QFT` when using the `qml.adjoint` function
* Fixed an example in the documentation's
[introduction to numpy gradients](https://pennylane.readthedocs.io/en/stable/introduction/interfaces/numpy.html), where
the wires were a non-differentiable argument to the QNode.
[(#1499)](https://github.com/PennyLaneAI/pennylane/pull/1499)

* Fixed a bug where the adjoint of `qml.QFT` when using the `qml.adjoint` function
was not correctly computed.
[(#1451)](https://github.com/PennyLaneAI/pennylane/pull/1451)

* Fixes the differentiability of the operation`IsingYY` for Autograd, Jax and Tensorflow.
* Fixed the differentiability of the operation `IsingYY` for Autograd, JAX, and TensorFlow.
[(#1425)](https://github.com/PennyLaneAI/pennylane/pull/1425)

* Fixed a bug in the `torch` interface that prevented gradients from being
15 changes: 15 additions & 0 deletions README.md
@@ -1,3 +1,18 @@
<p align="center">
<a href="https://pennylane.ai/blog/2021/07/pennylane-code-together/">
<img width=50% src="https://pennylane.ai/blog/images/code_together.jpg">
</a>
</p>

<p align="center">
<strong>Announcing <a href="https://pennylane.ai/blog/2021/07/pennylane-code-together/">PennyLane: Code Together</a>!
Join us on GitHub August 16th-27th, see <a href="https://github.com/PennyLaneAI/pennylane/blob/master/code_together.md">event FAQ here</a></strong>.
</p>

---



<p align="center">
<a href="https://pennylane.ai">
<img width=80% src="https://raw.githubusercontent.com/PennyLaneAI/pennylane/master/doc/_static/pennylane_thin.png">
47 changes: 47 additions & 0 deletions code_together.md
@@ -0,0 +1,47 @@
### PennyLane: Code Together

Join us on GitHub August 16th-27th. Be the first to solve an open issue with
the "code together" label and win some awesome swag!

### PennyLane: Code Together FAQ

Q: What is PennyLane:Code Together? <br />
A: PennyLane:Code Together is an online event where you can win swag for solving
PennyLane issues marked with the "code together" label.

Q: When and where does the Code Together take place? <br />
A: August 16th-27th, in the [PennyLane GitHub
repo](https://github.com/PennyLaneAI/pennylane).

Q: How can I participate in the Code Together? How do I get swag? <br />
A: Be the first to solve an open issue with the label "code together" in the
PennyLane GitHub repo. Every successful contributor gets digital and physical
swag.

Q: I'm new to PennyLane. Can I still participate? <br />
A: Definitely! You can have a look at any of the issues with the "code
together" label. It's worth noting that some issues have an extra bounty label
and are slightly more challenging. In case you have any questions about any of
the issues, feel free to comment on GitHub.

Q: Can I create my own Code Together issue? <br />
A: Yes! You can submit your own issue, just make sure to have the [Code
Together] prefix in the issue title. We will then evaluate the issue for
inclusion. If it receives the "code together" label, it will be part of the
event!

Q: Can a solution be created as a team? <br />
A: Yes, teams comprising more than one contributor can also submit solutions
to Code Together issues. It is worth noting that we will only be able to give out
physical swag to a single team member.

Q: How will I get my physical swag? <br />
A: We'll contact successful contributors via email to ask for their addresses.
Then it's just a matter of time until the swag arrives!

Q: What if my questions are more involved? Is there any way I could discuss
them with the PennyLane team? <br />
A: Yes! The PennyLane dev team will be available to discuss your Code Together
projects during our very first Community Call on August 19 at 11 am ET.
Furthermore, we'll be there throughout the Code Together to answer any of your
questions on GitHub.
38 changes: 24 additions & 14 deletions doc/introduction/interfaces/numpy.rst
@@ -122,12 +122,12 @@ Differentiable and non-differentiable arguments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

How does PennyLane know which arguments of a quantum function to differentiate, and which to ignore?
For example, you may want to pass arguments as positional arguments to a QNode but *not* have
For example, you may want to pass arguments to a QNode but *not* have
PennyLane consider them when computing gradients.

As a basic rule, **all positional arguments provided to the QNode are assumed to be differentiable
by default**. To accomplish this, all arrays created by the PennyLane NumPy module have a special
flag ``requires_grad`` specifying whether they are trainable or not:
by default**. To accomplish this, arguments are internally turned into arrays of the PennyLane NumPy module,
which have a special flag ``requires_grad`` specifying whether they are trainable or not:

>>> from pennylane import numpy as np
>>> np.array([0.1, 0.2])
@@ -146,8 +146,11 @@ tensor([0.1, 0.2], requires_grad=False)
The ``requires_grad`` argument can be passed to any NumPy function provided by PennyLane,
including NumPy functions that create arrays like ``np.random.random``, ``np.zeros``, etc.
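For instance, a small illustrative example:

>>> np.zeros([2], requires_grad=False)
tensor([0., 0.], requires_grad=False)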

An alternative way to avoid having positional arguments turned into differentiable PennyLane NumPy arrays is to
use a keyword argument syntax when the QNode is evaluated or when its gradient is computed.

For example, consider the following QNode that accepts one trainable argument ``weights``,
and two non-trainable arguments ``data`` and ``wires``:
and two non-differentiable arguments ``data`` and ``wires``:

.. code-block:: python
@@ -163,20 +166,28 @@ and two non-trainable arguments ``data`` and ``wires``:
qml.CNOT(wires=[wires[0], wires[2]])
return qml.expval(qml.PauliZ(wires[0]))
We must specify that ``data`` and ``wires`` are NumPy arrays with ``requires_grad=False``:
>>> weights = np.array([0.1, 0.2, 0.3])
For ``data``, which is a PennyLane NumPy array, we can simply specify ``requires_grad=False``:

>>> np.random.seed(42) # make the results reproducible
>>> data = np.random.random([2**3], requires_grad=False)
>>> wires = np.array([2, 0, 1], requires_grad=False)
>>> circuit(weights, data, wires)
0.16935626052294817

When we compute the derivative, arguments with ``requires_grad=False`` are explicitly ignored
by :func:`~.grad`:
But ``wires`` is a list in this example, and if we turned it into a PennyLane NumPy array we would have to
create a device that understands custom wire labels of this type.
It is much easier to use the second option laid out above, and pass ``wires`` to the
QNode using keyword argument syntax:

>>> wires = [2, 0, 1]
>>> weights = np.array([0.1, 0.2, 0.3])
>>> circuit(weights, data, wires=wires)
0.4124409353413991

When we compute the derivative, arguments with ``requires_grad=False`` as well as arguments passed as
keyword arguments are ignored by :func:`~.grad`:

>>> grad_fn = qml.grad(circuit)
>>> grad_fn(weights, data, wires)
(array([-1.69923049e-02, 0.00000000e+00, -8.32667268e-17]),)
>>> grad_fn(weights, data, wires=wires)
[-4.1382126e-02 0.0000000e+00 -6.9388939e-18]

.. note::

@@ -197,7 +208,6 @@ by :func:`~.grad`:
These arguments will always be treated as non-differentiable by the QNode and :func:`~.grad`
function.


Optimization
------------

2 changes: 2 additions & 0 deletions pennylane/gradients/__init__.py
@@ -16,6 +16,8 @@

from . import finite_difference
from . import parameter_shift
from . import parameter_shift_cv

from .finite_difference import finite_diff, finite_diff_coeffs, generate_shifted_tapes
from .parameter_shift import param_shift
from .parameter_shift_cv import param_shift_cv
4 changes: 2 additions & 2 deletions pennylane/gradients/finite_difference.py
@@ -161,7 +161,7 @@ def generate_shifted_tapes(tape, idx, shifts, multipliers=None):
``idx`` shifted by consecutive values of ``shift``. The length
of the returned list of tapes will match the length of ``shifts``.
"""
params = tape.get_parameters()
params = list(tape.get_parameters())
tapes = []

for i, s in enumerate(shifts):
@@ -300,7 +300,7 @@ def processing_fn(results):
# In the future, we might want to change this so that only tuples
# of arrays are returned.
for i, g in enumerate(grads):
g = qml.math.convert_like(g, res[0])
g = qml.math.convert_like(g, results[0])
if hasattr(g, "dtype") and g.dtype is np.dtype("object"):
grads[i] = qml.math.hstack(g)
