
Upgrade null.qubit to the new device API #5211

Merged — 30 commits merged into master on Mar 4, 2024
Conversation

@timmysilv (Contributor) commented Feb 14, 2024

Context:
null.qubit doesn't quite do everything as it should, so it needs an upgrade of sorts.

Description of the Change:
Port null.qubit to the new Device API. The diff for the device file can be tough to parse, so here is some context for what I did. In the first commit, I copy-pasted the contents of default_qubit.py, then replaced all actual computation methods with dummy methods (e.g. _simulate(), _vjp(), etc.). It's probably best to filter out that commit in the GitHub UI when reviewing the changes. I'd even recommend reviewing commits 2 and 3 together, then commits 4 onward together, so it's clear what I've actually done.
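The dummy-method pattern described above can be sketched in isolation. This is an illustrative mock, not the actual PennyLane implementation — the method names mirror the ones mentioned in the PR description, and the zero-valued results are an assumption about what a "null" device returns:

```python
class NullSimulatorSketch:
    """Illustrative mock: mirrors a simulator's interface but replaces
    every computation method with a dummy returning zeros of the
    expected arity (hypothetical class, not PennyLane code)."""

    def _simulate(self, num_measurements):
        # A real simulator would evolve a statevector here; the null
        # version only fabricates zero-valued results.
        results = tuple(0.0 for _ in range(num_measurements))
        return results[0] if num_measurements == 1 else results

    def _vjp(self, num_params):
        # The vector-Jacobian product is likewise all zeros.
        return tuple(0.0 for _ in range(num_params))


sim = NullSimulatorSketch()
print(sim._simulate(1))   # 0.0
print(sim._simulate(2))   # (0.0, 0.0)
print(sim._vjp(3))        # (0.0, 0.0, 0.0)
```

Because no state is ever evolved, every call is effectively free, which is what makes the device useful for profiling everything around the execution.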

I also defined ClassicalShadow.shape according to what it returns with default.qubit. While testing, I found that it ignores batching entirely. We might want to raise an error for that in general, but for now I'm only raising the error in null.qubit when it hits that case.
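For context, a classical-shadow result with default.qubit is a pair of per-shot, per-wire integer arrays (measured bits and measurement recipes). A minimal sketch of the shape rule, with a hypothetical helper name rather than the actual PennyLane method:

```python
def classical_shadow_shape(shots: int, num_wires: int) -> tuple:
    """Shape of a classical-shadow result: one bit array and one recipe
    array (leading dimension 2), each with a row per shot and a column
    per wire. Hypothetical helper illustrating the shape rule."""
    return (2, shots, num_wires)


print(classical_shadow_shape(100, 4))  # (2, 100, 4)
```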

Benefits:
null.qubit does what it's supposed to!

Possible Drawbacks:
NullQubit.compute_vjp doesn't work in all cases, particularly with JAX. This can be improved in the future, but the reason appears to be that JAX is doing some clever things behind the scenes with inputs and outputs that I'm not anticipating.

[sc-51495]


Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.


codecov bot commented Feb 14, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.65%. Comparing base (f9bdb32) to head (1d991e5).
Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5211      +/-   ##
==========================================
- Coverage   99.65%   99.65%   -0.01%     
==========================================
  Files         399      399              
  Lines       36901    36703     -198     
==========================================
- Hits        36774    36575     -199     
- Misses        127      128       +1     


@albi3ro (Contributor) commented Feb 16, 2024

Potentially worth seeing how this PR would change with #5200 .

@albi3ro (Contributor) commented Feb 16, 2024

Note: I replaced null qubit preprocessing with default qubit preprocessing.

Some initial results from running:

```python
import pennylane as qml
import numpy as np

n_wires = 40
n_layers = 5

# The device definition was omitted in the original comment;
# null.qubit is assumed from the PR context.
dev = qml.device("null.qubit", wires=n_wires)

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_wires)
rng = np.random.default_rng(seed=1239594)
params = qml.numpy.array(rng.random(shape))

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return [qml.expval(qml.Z(i)) for i in range(n_wires)]


def cost(params):
    out = circuit(params)
    return qml.math.sum(out)

qml.grad(cost)(params)
```

Half of the time is spent in QuantumScript.hash, and 33% in convert_to_numpy_parameters... hmmm... We might want to make caching default only with higher-order derivatives.

[Screenshot: profiling results, 2024-02-16]
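The profiling observation above — hashing dominating even when nothing is reused — follows from how execution caching has to work: the circuit hash must be computed on every execution, hit or miss. A minimal stdlib sketch (hypothetical class, not PennyLane's cache):

```python
import hashlib


class TinyResultCache:
    """Minimal sketch of execution caching keyed by a circuit hash.
    The hash is computed on every execution, even on a miss, so
    hashing cost scales with the number of executions."""

    def __init__(self):
        self.store = {}

    def key(self, circuit_repr: str) -> str:
        # Stand-in for QuantumScript.hash: digest a textual circuit form.
        return hashlib.sha256(circuit_repr.encode()).hexdigest()

    def execute(self, circuit_repr, run):
        k = self.key(circuit_repr)   # always paid
        if k not in self.store:
            self.store[k] = run()    # only paid on a miss
        return self.store[k]


cache = TinyResultCache()
calls = []
res1 = cache.execute("circuit-A", lambda: calls.append(1) or 0.5)
res2 = cache.execute("circuit-A", lambda: calls.append(1) or 0.5)
print(res1, res2, len(calls))  # 0.5 0.5 1
```

When circuits are rarely duplicated, the per-execution hashing is pure overhead, which motivates making caching opt-in for first-order workflows.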

@timmysilv (Contributor, Author) commented

Also plenty of time spent evaluating the batch size :) So... this is null.qubit doing what it does best, right? Helping us benchmark and improve PennyLane at large?

(Three review threads on pennylane/devices/null_qubit.py — outdated, resolved.)
@timmysilv timmysilv marked this pull request as ready for review February 21, 2024 15:20
@timmysilv timmysilv requested a review from a team February 21, 2024 15:21
@albi3ro (Contributor) left a comment

Still needs a changelog entry, but looks great to me!

I'm a little puzzled by the failing TensorFlow default.qubit test, but I can't imagine it's because of anything that happened in this PR, so I just retriggered it.

@timmysilv timmysilv requested a review from a team February 23, 2024 17:33
albi3ro added a commit that referenced this pull request Feb 23, 2024
**Context:**

Recent benchmarks (see
#5211 (comment))
have shown that caching adds massive classical overheads, but often does
not actually reduce the number of executions in normal workflows.

Because of this, we want to make smart choices about when to use
caching. Higher-order derivatives often result in duplicate circuits, so
we need to keep caching when calculating higher-order derivatives. But,
we can make caching opt-in for normal workflows. This will lead to
reduced overheads in the vast majority of workflows.

**Description of the Change:**

The `QNode` keyword argument `cache` defaults to `None`. This is
interpreted as `True` if `max_diff > 1` and `False` otherwise.
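The defaulting rule above can be sketched as a small helper (hypothetical name; the actual logic lives inside the QNode machinery):

```python
def resolve_cache(cache, max_diff: int) -> bool:
    """Interpret cache=None: enable caching only when higher-order
    derivatives (max_diff > 1) can generate duplicate circuits."""
    if cache is None:
        return max_diff > 1
    return bool(cache)


print(resolve_cache(None, 1))   # False -> caching off by default
print(resolve_cache(None, 2))   # True  -> higher-order derivatives
print(resolve_cache(True, 1))   # True  -> explicit opt-in wins
```

An explicit `cache=True` or `cache=False` always takes precedence; only the `None` default is interpreted.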

**Benefits:**

Vastly reduced classical overheads in most cases.

**Possible Drawbacks:**

Increased number of executions in a few edge cases. But these edge cases
would be fairly convoluted. Somehow a transform would have to turn the
starting tape into two identical tapes.

**Related GitHub Issues:**

**Performance Numbers:**

```
import pennylane as qml

n_layers = 1
n_wires = 20  # both parameters are varied across the benchmarks below

dev = qml.device('lightning.qubit', wires=n_wires)

shape = qml.StronglyEntanglingLayers.shape(n_wires=n_wires, n_layers=n_layers)
rng = qml.numpy.random.default_rng(seed=42)
params = rng.random(shape)

@qml.qnode(dev, cache=False)
def circuit(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return qml.expval(qml.Z(0))

@qml.qnode(dev, cache=True)
def circuit_cache(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return qml.expval(qml.Z(0))
```

For `n_wires = 20`
![Screenshot 2024-02-21 at 5 19 49 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/8b94647c-b0f3-4b0b-8fc7-62c26ac06d70)

But for `n_wires= 10`:
![Screenshot 2024-02-21 at 5 20 43 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/04248f9e-c949-49a3-a125-21025fd030d6)

For `n_wires=20 n_layers=5`, we have:
![Screenshot 2024-02-21 at 6 02 40 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/feeccd11-cc52-440d-8655-14a2d15ed93d)

While the cached version does seem faster here, that appears to be statistical fluctuation.

For `n_wires=10 n_layers=20`:

![Screenshot 2024-02-21 at 6 09 04 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/64801dd4-a1ce-40c5-84ae-367ddf66aa26)
@Alex-Preciado Alex-Preciado requested review from PietropaoloFrisoni and removed request for a team February 27, 2024 18:50
@PietropaoloFrisoni (Contributor) commented

Great, I'll start reviewing it today. Realistically, it will take me a few days to properly go over this PR, but I'll do it as soon as possible.

@PietropaoloFrisoni (Contributor) left a comment

Amazing! Thank you for requesting my review (I actually 'studied' this new part of the code and definitely learned a lot).

@timmysilv timmysilv enabled auto-merge (squash) March 4, 2024 14:28
@timmysilv timmysilv merged commit 643df7b into master Mar 4, 2024
40 checks passed
@timmysilv timmysilv deleted the null-qubit-upgrade branch March 4, 2024 18:24