Remove deprecated old transform functionalities #5339

Merged
merged 14 commits on Mar 14, 2024
17 changes: 9 additions & 8 deletions doc/development/deprecations.rst
@@ -43,14 +43,6 @@ Pending deprecations
- Deprecated in v0.35
- Will raise an error in v0.36

* ``single_tape_transform``, ``batch_transform``, ``qfunc_transform``, and ``op_transform`` are
deprecated. Instead switch to using the new ``qml.transform`` function. Please refer to
`the transform docs <https://docs.pennylane.ai/en/stable/code/qml_transforms.html#custom-transforms>`_
to see how this can be done.

- Deprecated in v0.34
- Will be removed in v0.36

* ``PauliWord`` and ``PauliSentence`` no longer use ``*`` for matrix and tensor products,
but instead use ``@`` to conform with the PennyLane convention.

@@ -64,6 +56,15 @@ Pending deprecations
Completed deprecation cycles
----------------------------

* ``single_tape_transform``, ``batch_transform``, ``qfunc_transform``, ``op_transform``,
``gradient_transform`` and ``hessian_transform`` are deprecated. Instead, switch to using the new
``qml.transform`` function. Please refer to
`the transform docs <https://docs.pennylane.ai/en/stable/code/qml_transforms.html#custom-transforms>`_
to see how this can be done.

- Deprecated in v0.34
- Removed in v0.36

* ``MeasurementProcess.name`` and ``MeasurementProcess.data`` have been deprecated, as they contain
dummy values that are no longer needed.

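For anyone migrating code away from the removed decorators, the following is a minimal sketch of a custom transform written against the new ``qml.transform`` API. The transform name ``add_final_rotation``, its ``angle`` parameter, and the circuit manipulation are purely illustrative and not part of this PR; the linked transform docs remain the authoritative reference.

```python
from typing import Callable, Sequence

import pennylane as qml
from pennylane.tape import QuantumTape


# Hypothetical example: a transform that appends an extra RZ rotation to every tape.
@qml.transform
def add_final_rotation(tape: QuantumTape, angle: float = 0.1) -> (Sequence[QuantumTape], Callable):
    """Return a batch of tapes and a postprocessing function, as qml.transform expects."""
    new_ops = list(tape.operations) + [qml.RZ(angle, wires=0)]
    new_tape = qml.tape.QuantumTape(new_ops, tape.measurements, shots=tape.shots)

    def postprocessing(results):
        # One tape in, one result out.
        return results[0]

    return [new_tape], postprocessing
```

Applied to a QNode, e.g. ``add_final_rotation(circuit, angle=0.2)``, the transformed function should execute implicitly at call time, mirroring how the removed decorators were typically used.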
6 changes: 6 additions & 0 deletions doc/releases/changelog-dev.md
@@ -62,6 +62,12 @@
* The contents of ``qml.interfaces`` are moved inside ``qml.workflow``. The old import path no longer exists.
[(#5329)](https://github.com/PennyLaneAI/pennylane/pull/5329)

* ``single_tape_transform``, ``batch_transform``, ``qfunc_transform``, ``op_transform``, ``gradient_transform``
and ``hessian_transform`` are removed. Instead, switch to using the new ``qml.transform`` function. Please refer to
[the transform docs](https://docs.pennylane.ai/en/stable/code/qml_transforms.html#custom-transforms)
to see how this can be done.
[(#5339)](https://github.com/PennyLaneAI/pennylane/pull/5339)

<h3>Deprecations 👋</h3>

* ``qml.load`` is deprecated. Instead, please use the functions outlined in the *Importing workflows* quickstart guide, such as ``qml.from_qiskit``.
3 changes: 0 additions & 3 deletions pennylane/__init__.py
@@ -90,9 +90,6 @@
compile,
defer_measurements,
dynamic_one_shot,
qfunc_transform,
op_transform,
single_tape_transform,
quantum_monte_carlo,
apply_controlled_Q,
commutation_dag,
3 changes: 1 addition & 2 deletions pennylane/gradients/__init__.py
@@ -338,8 +338,7 @@ def my_custom_gradient(tape: qml.tape.QuantumTape, **kwargs) -> (Sequence[qml.tape.QuantumTape], Callable):
from . import pulse_gradient
from . import pulse_gradient_odegen

from .gradient_transform import gradient_transform, SUPPORTED_GRADIENT_KWARGS
from .hessian_transform import hessian_transform
from .gradient_transform import SUPPORTED_GRADIENT_KWARGS
from .finite_difference import finite_diff, finite_diff_coeffs
from .parameter_shift import param_shift
from .parameter_shift_cv import param_shift_cv
154 changes: 0 additions & 154 deletions pennylane/gradients/gradient_transform.py
@@ -18,7 +18,6 @@
import warnings

import pennylane as qml
from pennylane.transforms.tape_expand import expand_invalid_trainable
from pennylane.measurements import (
MutualInfoMP,
StateMP,
@@ -441,156 +440,3 @@ def _reshape(x):
if not cjac_is_tuple:
return tuple(tdot(qml.math.stack(q), qml.math.stack(cjac)) for q in qjac)
return tuple(tuple(tdot(qml.math.stack(q), c) for c in cjac if c is not None) for q in qjac)


class gradient_transform(qml.batch_transform): # pragma: no cover
"""Decorator for defining quantum gradient transforms.

Quantum gradient transforms are a specific case of :class:`~.batch_transform`.
All quantum gradient transforms accept a tape, and output
a batch of tapes to be independently executed on a quantum device, alongside
a post-processing function that returns the result.

Args:
expand_fn (function): An expansion function (if required) to be applied to the
input tape before the gradient computation takes place. If not provided,
the default expansion function simply expands all operations that
have ``Operation.grad_method=None`` until all resulting operations
have a defined gradient method.
differentiable (bool): Specifies whether the gradient transform is differentiable or
not. A transform may be non-differentiable if it does not use an
autodiff framework for its tensor manipulations. In such a case, setting
``differentiable=False`` instructs the decorator
to mark the output as 'constant', reducing potential overhead.
hybrid (bool): Specifies whether classical processing inside a QNode
should be taken into account when transforming a QNode.

- If ``True`` and classical processing is detected, the Jacobian of the
classical processing will be computed and included. When evaluated, the
returned Jacobian will be with respect to the QNode arguments.

- If ``False``, any internal QNode classical processing will be
**ignored**. When evaluated, the returned Jacobian will be with
respect to the **gate** arguments, and not the QNode arguments.

Supported gradient transforms must be of the following form:

.. code-block:: python

@gradient_transform
def my_custom_gradient(tape, argnum=None, **kwargs):
...
return gradient_tapes, processing_fn

where:

- ``tape`` (*QuantumTape*): the input quantum tape to compute the gradient of

- ``argnum`` (*int* or *list[int]* or *None*): Which trainable parameters of the tape
to differentiate with respect to. If not provided, the derivatives with respect to all
trainable inputs of the tape should be returned (``tape.trainable_params``).

- ``gradient_tapes`` (*list[QuantumTape]*): is a list of output tapes to be evaluated.
If this list is empty, no quantum evaluations will be made.

- ``processing_fn`` is a processing function to be applied to the output of the evaluated
``gradient_tapes``. It should accept a list of numeric results with length ``len(gradient_tapes)``,
and return the Jacobian matrix.

Once defined, the quantum gradient transform can be used as follows:

>>> gradient_tapes, processing_fn = my_custom_gradient(tape, *gradient_kwargs)
>>> res = execute(gradient_tapes, dev, interface="autograd", gradient_fn=qml.gradients.param_shift)
>>> jacobian = processing_fn(res)

Alternatively, gradient transforms can be applied directly to QNodes,
in which case the execution is implicit:

>>> fn = my_custom_gradient(qnode, *gradient_kwargs)
>>> fn(weights) # transformed function takes the same arguments as the QNode
1.2629730888100839

.. note::

The input tape might have parameters of various types, including
NumPy arrays, JAX Arrays, and TensorFlow and PyTorch tensors.

If the gradient transform is written in an autodiff-compatible manner, either by
using a framework such as Autograd or TensorFlow, or by using ``qml.math`` for
tensor manipulation, then higher-order derivatives will also be supported.

Alternatively, you may use the ``tape.unwrap()`` context manager to temporarily
convert all tape parameters to NumPy arrays and floats:

>>> with tape.unwrap():
... params = tape.get_parameters() # list of floats
"""

def __repr__(self):
return f"<gradient_transform: {self.__name__}>" # pylint: disable=no-member

def __init__(
self, transform_fn, expand_fn=expand_invalid_trainable, differentiable=True, hybrid=True
):
self.hybrid = hybrid
super().__init__(transform_fn, expand_fn=expand_fn, differentiable=differentiable)

def default_qnode_wrapper(self, qnode, targs, tkwargs): # pylint: disable=too-many-statements
# Here, we overwrite the QNode execution wrapper in order
# to take into account that classical processing may be present
# inside the QNode.
hybrid = tkwargs.pop("hybrid", self.hybrid)
_wrapper = super().default_qnode_wrapper(qnode, targs, tkwargs)

def jacobian_wrapper(
*args, **kwargs
): # pylint: disable=too-many-return-statements, too-many-branches, too-many-statements
argnums = tkwargs.get("argnums", None)

interface = qml.math.get_interface(*args)
trainable_params = qml.math.get_trainable_indices(args)

if interface == "jax" and tkwargs.get("argnum", None):
raise qml.QuantumFunctionError(
"argnum does not work with the Jax interface. You should use argnums instead."
)

if interface == "jax" and not trainable_params:
if argnums is None:
argnums_ = [0]

else:
argnums_ = [argnums] if isinstance(argnums, int) else argnums

params = qml.math.jax_argnums_to_tape_trainable(
qnode, argnums_, self.expand_fn, args, kwargs
)
argnums_ = qml.math.get_trainable_indices(params)
kwargs["argnums"] = argnums_

elif not trainable_params:
warnings.warn(
"Attempted to compute the gradient of a QNode with no trainable parameters. "
"If this is unintended, please add trainable parameters in accordance with "
"the chosen auto differentiation framework."
)
return ()

qjac = _wrapper(*args, **kwargs)

if not hybrid:
return qjac

kwargs.pop("shots", False)

# Special case where we apply a Jax transform (jacobian e.g.) on the gradient transform and argnums are
# defined on the outer transform and therefore on the args.
argnum_cjac = trainable_params or argnums if interface == "jax" else None
cjac = qml.gradients.classical_jacobian(
qnode, argnum=argnum_cjac, expand_fn=self.expand_fn
)(*args, **kwargs)

return _contract_qjac_with_cjac(qjac, cjac, qnode.tape) # pragma: no cover

return jacobian_wrapper
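For completeness, the tape-level workflow that the removed ``gradient_transform`` docstring described (build gradient tapes, execute them, post-process the results) still applies to the built-in gradient transforms. Below is a minimal sketch using ``qml.gradients.param_shift``; the device, circuit, and parameter value are illustrative only.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

# A single-parameter tape: RX(theta) followed by an expectation value.
ops = [qml.RX(np.array(0.543, requires_grad=True), wires=0)]
measurements = [qml.expval(qml.PauliZ(0))]
tape = qml.tape.QuantumTape(ops, measurements)

# Gradient transforms map one tape to a batch of tapes plus a postprocessing function.
gradient_tapes, processing_fn = qml.gradients.param_shift(tape)
results = qml.execute(gradient_tapes, dev)
jacobian = processing_fn(results)  # d<Z>/d(theta), evaluated at theta = 0.543
```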