Turn caching off by default when max_diff==1 #5243

Merged: 8 commits, Feb 23, 2024
Changes from 2 commits
3 changes: 3 additions & 0 deletions doc/releases/changelog-dev.md
@@ -299,6 +299,9 @@

<h3>Breaking changes 💔</h3>

* Caching of executions is now turned off by default when `max_diff == 1`, as the classical overhead cost
  outweighs the probability that duplicate circuits exist.

* The entry point convention registering compilers with PennyLane has changed.
[(#5140)](https://github.com/PennyLaneAI/pennylane/pull/5140)

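To make the new default concrete, here is a minimal sketch of opting back in to caching under the changed behavior. It assumes only the public `qml.qnode` / `qml.device` interface; the device, wire count, and circuit body are illustrative placeholders.

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

# With max_diff=1 (the QNode default), caching is now off unless requested.
# Passing cache=True explicitly restores the previous behavior.
@qml.qnode(dev, max_diff=1, cache=True)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))
```

Leaving `cache` unset keeps it at `None`, which now enables caching only when `max_diff > 1`, where duplicate circuits are more likely during higher-order derivative computations.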
8 changes: 5 additions & 3 deletions pennylane/workflow/qnode.py
@@ -183,8 +183,9 @@ class QNode:
Only applies if the device is queried for the gradient; gradient transform
functions available in ``qml.gradients`` are only supported on the backward
pass. The 'best' option chooses automatically between the two options and is default.
cache (bool or dict or Cache): Whether to cache evaluations. This can result in
a significant reduction in quantum evaluations during gradient computations.
cache=None (None or bool or dict or Cache): Whether to cache evaluations.
``None`` indicates to cache only when ``max_diff > 1``. This can result in
a reduction in quantum evaluations during higher order gradient computations.
If ``True``, a cache with corresponding ``cachesize`` is created for each batch
execution. If ``False``, no caching is used. You may also pass your own cache
to be used; this can be any object that implements the special methods
@@ -415,7 +416,7 @@ def __init__(
expansion_strategy="gradient",
max_expansion=10,
grad_on_execution="best",
cache=True,
cache=None,
cachesize=10000,
max_diff=1,
device_vjp=False,
@@ -483,6 +484,7 @@ def __init__(
self.diff_method = diff_method
self.expansion_strategy = expansion_strategy
self.max_expansion = max_expansion
cache = (max_diff > 1) if cache is None else cache

# execution keyword arguments
self.execute_kwargs = {
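The behavior change itself comes down to the single resolution line added to `__init__` above. A standalone sketch of that rule follows; the helper name is illustrative and not part of the PennyLane API.

```python
def resolve_cache_default(cache, max_diff):
    """Mirror the QNode default: an explicit True/False/dict-like cache
    passes through unchanged, while None enables caching only for
    higher-order derivatives (max_diff > 1)."""
    return (max_diff > 1) if cache is None else cache


assert resolve_cache_default(None, 1) is False  # new default: caching off
assert resolve_cache_default(None, 2) is True   # higher-order: caching on
assert resolve_cache_default(True, 1) is True   # explicit opt-in still respected
```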
20 changes: 20 additions & 0 deletions tests/test_qnode.py
@@ -77,6 +77,26 @@ def test_copy():
assert copied_qn.expansion_strategy == qn.expansion_strategy


class TestInitialization:
    def test_cache_initialization_maxdiff_1(self):
        """Test that when max_diff = 1, the cache initializes to False."""

        @qml.qnode(qml.device("default.qubit"), max_diff=1)
        def f():
            return qml.state()

        assert f.execute_kwargs["cache"] is False

    def test_cache_initialization_maxdiff_2(self):
        """Test that when max_diff = 2, the cache initializes to True."""

        @qml.qnode(qml.device("default.qubit"), max_diff=2)
        def f():
            return qml.state()

        assert f.execute_kwargs["cache"] is True


# pylint: disable=too-many-public-methods
class TestValidation:
"""Tests for QNode creation and validation"""