
Merge rc to master (#1837)
* extend the changelog item of the init module deprecation (#1833)

* add sorting to `get_parameters`

* undo accidental direct push to v0.19.0-rc0

* Remove template decorator from PennyLane functions (#1808) (#1835)

* Remove template decorator.

* Correct import and changelog.

* Change interferometer.

* Update pennylane/templates/broadcast.py

Co-authored-by: antalszava <antalszava@gmail.com>

* Update doc/releases/changelog-dev.md

Co-authored-by: antalszava <antalszava@gmail.com>

* Update pennylane/templates/broadcast.py

Co-authored-by: antalszava <antalszava@gmail.com>

* Update from review.

* More import removed.

* Update parametric ops.

* Update pennylane/templates/subroutines/arbitrary_unitary.py

Co-authored-by: antalszava <antalszava@gmail.com>

Co-authored-by: antalszava <antalszava@gmail.com>

Co-authored-by: Romain <rmoyard@gmail.com>

* Update `QNGOptimizer` to handle deprecations (#1834)

* pass approx option in QNG if need be

* convert to array instead of list; add recwarn to QNG test case

* Update pennylane/optimize/qng.py

Co-authored-by: Josh Izaac <josh146@gmail.com>

* add expval(H) with two input params test case

* deprecate diag_approx keyword for QNGOptimizer

* format

* docstring

* changelog

Co-authored-by: Josh Izaac <josh146@gmail.com>

Co-authored-by: dwierichs <davidwierichs@gmail.com>
Co-authored-by: Romain <rmoyard@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
4 people committed Nov 2, 2021
1 parent 48aa626 commit ee151b0
Showing 3 changed files with 115 additions and 14 deletions.
34 changes: 28 additions & 6 deletions doc/releases/changelog-0.19.0.md
@@ -625,6 +625,10 @@

<h3>Improvements</h3>

* Updated the `qml.QNGOptimizer.step_and_cost` method to avoid the use of
deprecated functionality.
[(#1834)](https://github.com/PennyLaneAI/pennylane/pull/1834)

* The default for an `Operation`'s `control_wires` attribute is now an empty `Wires`
  object instead of the attribute raising a `NotImplementedError`.
  [(#1821)](https://github.com/PennyLaneAI/pennylane/pull/1821)
@@ -891,12 +895,6 @@

If `hybrid=False`, the changed expansion rule might lead to a changed output.

* The `qml.metric_tensor` keyword argument `diag_approx` is deprecated.
Approximations can be controlled with the more fine-grained `approx`
keyword argument, with `approx="block-diag"` (the default) reproducing
the old behaviour.
[(#1721)](https://github.com/PennyLaneAI/pennylane/pull/1721)

* The `default.qubit.torch` device automatically determines if computations
should be run on a CPU or a GPU and doesn't take a `torch_device` argument
anymore.
@@ -923,6 +921,14 @@

<h3>Deprecations</h3>

* The `diag_approx` keyword argument of `qml.metric_tensor` and
  `qml.QNGOptimizer` is deprecated.
  Approximations can be controlled with the more fine-grained `approx` keyword
  argument, with `approx="block-diag"` (the default) reproducing the old
  behaviour.
[(#1721)](https://github.com/PennyLaneAI/pennylane/pull/1721)
[(#1834)](https://github.com/PennyLaneAI/pennylane/pull/1834)

* The `template` decorator is now deprecated with a warning message and will be removed
in release `v0.20.0`. It has been removed from different PennyLane functions.
[(#1794)](https://github.com/PennyLaneAI/pennylane/pull/1794)
@@ -997,6 +1003,22 @@
which can then be generated manually.
[(#1689)](https://github.com/PennyLaneAI/pennylane/pull/1689)

To generate the parameter tensors, the `np.random.normal` and
`np.random.uniform` functions can be used (just like in the `init` module).
  Relative to the default arguments of these functions as of NumPy v1.21, the
  `init` module used the following non-default options:

* All functions generating normally distributed parameters used
`np.random.normal` by passing `scale=0.1`;

* Most functions generating uniformly distributed parameters (except for
certain CVQNN initializers) used `np.random.uniform` by passing
`high=2*math.pi`;

  * The `cvqnn_layers_r_uniform`, `cvqnn_layers_a_uniform`, and
    `cvqnn_layers_kappa_uniform` functions used `np.random.uniform` by passing
    `high=0.1`.
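The replacements above can be sketched with plain NumPy; the shape used here is an arbitrary example stand-in for whatever parameter shape a given template requires:

```python
import math

import numpy as np

shape = (3, 4)  # example only; use the shape the template expects

# Replacement for the *_normal initializers (scale=0.1):
normal_params = np.random.normal(scale=0.1, size=shape)

# Replacement for most *_uniform initializers (high=2*pi):
uniform_params = np.random.uniform(high=2 * math.pi, size=shape)

# Replacement for cvqnn_layers_{r,a,kappa}_uniform (high=0.1):
cvqnn_params = np.random.uniform(high=0.1, size=shape)
```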

* The `QNode.draw` method has been deprecated, and will be removed in an upcoming release.
Please use the `qml.draw` transform instead.
[(#1746)](https://github.com/PennyLaneAI/pennylane/pull/1746)
26 changes: 21 additions & 5 deletions pennylane/optimize/qng.py
@@ -14,8 +14,9 @@
"""Quantum natural gradient optimizer"""
# pylint: disable=too-many-branches
# pylint: disable=too-many-arguments
import warnings

import numpy as np
from pennylane import numpy as np

import pennylane as qml
from pennylane.utils import _flatten, unflatten
@@ -150,9 +151,23 @@ class QNGOptimizer(GradientDescentOptimizer):
to be applied at each optimization step
"""

def __init__(self, stepsize=0.01, diag_approx=False, lam=0):
def __init__(self, stepsize=0.01, approx="block-diag", diag_approx=None, lam=0):
super().__init__(stepsize)
self.diag_approx = diag_approx

approx_set = False
if diag_approx is not None:

warnings.warn(
"The keyword argument diag_approx is deprecated. Please use approx='diag' instead.",
UserWarning,
)
if diag_approx:
self.approx = "diag"
approx_set = True

if not approx_set:
self.approx = approx

self.metric_tensor = None
self.lam = lam

@@ -192,13 +207,14 @@ def step_and_cost(

if recompute_tensor or self.metric_tensor is None:
if metric_tensor_fn is None:
metric_tensor_fn = qml.metric_tensor(qnode, diag_approx=self.diag_approx)

metric_tensor_fn = qml.metric_tensor(qnode, approx=self.approx)

self.metric_tensor = metric_tensor_fn(*args, **kwargs)
self.metric_tensor += self.lam * np.identity(self.metric_tensor.shape[0])

g, forward = self.compute_grad(qnode, args, kwargs, grad_fn=grad_fn)
new_args = self.apply_grad(g, args)
new_args = np.array(self.apply_grad(g, args), requires_grad=True)

if forward is None:
forward = qnode(*args, **kwargs)
69 changes: 66 additions & 3 deletions tests/optimize/test_qng.py
@@ -1,4 +1,4 @@
# Copyright 2018-2020 Xanadu Quantum Technologies Inc.
# Copyright 2018-2021 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,6 +14,7 @@
"""Tests for the QNG optimizer"""
import pytest
import scipy as sp
import warnings

import pennylane as qml
from pennylane import numpy as np
@@ -228,7 +229,7 @@ def gradient(params):
# check final cost
assert np.allclose(cost_fn(theta), -1.41421356, atol=tol, rtol=0)

def test_single_qubit_vqe_using_vqecost(self, tol):
def test_single_qubit_vqe_using_vqecost(self, tol, recwarn):
"""Test single-qubit VQE using ExpvalCost
has the correct QNG value every step, the correct parameter updates,
and correct cost after 200 steps"""
@@ -252,7 +253,7 @@ def gradient(params):
return np.array([da, db])

eta = 0.01
init_params = np.array([0.011, 0.012])
init_params = np.array([0.011, 0.012], requires_grad=True)
num_steps = 200

opt = qml.QNGOptimizer(eta)
@@ -275,3 +276,65 @@ def gradient(params):

# check final cost
assert np.allclose(cost_fn(theta), -1.41421356, atol=tol, rtol=0)
assert len(recwarn) == 0

def test_single_qubit_vqe_using_expval_h_multiple_input_params(self, tol, recwarn):
"""Test single-qubit VQE by returning qml.expval(H) in the QNode and
check for the correct QNG value every step, the correct parameter updates, and
correct cost after 200 steps"""
dev = qml.device("default.qubit", wires=1)
coeffs = [1, 1]
obs_list = [qml.PauliX(0), qml.PauliZ(0)]

H = qml.Hamiltonian(coeffs=coeffs, observables=obs_list)

@qml.qnode(dev)
def circuit(x, y, wires=0):
qml.RX(x, wires=wires)
qml.RY(y, wires=wires)
return qml.expval(H)

eta = 0.01
x = np.array(0.011, requires_grad=True)
y = np.array(0.022, requires_grad=True)

def gradient(params):
"""Returns the gradient"""
da = -np.sin(params[0]) * (np.cos(params[1]) + np.sin(params[1]))
db = np.cos(params[0]) * (np.cos(params[1]) - np.sin(params[1]))
return np.array([da, db])

num_steps = 200

opt = qml.QNGOptimizer(eta)

# optimization for 200 steps total
for t in range(num_steps):
theta = np.array([x, y])
x, y = opt.step(circuit, x, y)

# check metric tensor
res = opt.metric_tensor
exp = np.diag([0.25, (np.cos(x) ** 2) / 4])
assert np.allclose(res, exp, atol=0.00001, rtol=0)

# check parameter update
theta_new = np.array([x, y])
dtheta = eta * sp.linalg.pinvh(exp) @ gradient(theta)
assert np.allclose(dtheta, theta - theta_new, atol=0.000001, rtol=0)

# check final cost
assert np.allclose(circuit(x, y), -1.41421356, atol=tol, rtol=0)
assert len(recwarn) == 0

@pytest.mark.parametrize(
"diag_approx, approx_expected", [(True, "diag"), (False, "block-diag")]
)
def test_deprecate_diag_approx(self, diag_approx, approx_expected):
"""Test that using the diag_approx argument raises a warning due to
deprecation."""
with pytest.warns(UserWarning, match="keyword argument diag_approx is deprecated"):
opt = qml.QNGOptimizer(0.1, diag_approx=True)

assert opt.approx == "diag"
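For context, the analytic quantities asserted in the multi-parameter test above (the block-diagonal metric tensor `diag(1/4, cos²(x)/4)` and the closed-form gradient of the cost `cos(x)·(cos(y) + sin(y))`) combine into a self-contained NumPy/SciPy sketch of the QNG update rule, independent of PennyLane:

```python
import numpy as np
import scipy.linalg


def qng_step(theta, eta=0.01):
    """One quantum-natural-gradient step for the RX(x)-RY(y) toy model
    with cost <X> + <Z> = cos(x) * (cos(y) + sin(y))."""
    x, y = theta
    # Block-diagonal metric tensor of the circuit, as asserted in the test.
    g = np.diag([0.25, (np.cos(x) ** 2) / 4])
    # Analytic gradient of the cost, as used in the test.
    grad = np.array([
        -np.sin(x) * (np.cos(y) + np.sin(y)),
        np.cos(x) * (np.cos(y) - np.sin(y)),
    ])
    # theta_new = theta - eta * pinv(g) @ grad
    return theta - eta * scipy.linalg.pinvh(g) @ grad


theta = np.array([0.011, 0.022])
for _ in range(200):
    theta = qng_step(theta)
# the cost converges toward -sqrt(2), matching the test's final assertion
```

This is a sketch of the update rule only; the real `QNGOptimizer` evaluates the metric tensor and gradient from the QNode rather than from closed-form expressions.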
