
[bugfix] Probabilities do not sum to one with Torch #5462

Merged
merged 21 commits into master from bugfix/probs_do_not_sum_to_one_torch on Apr 8, 2024
Commits (21 total; the diff below shows changes from 12 commits)
1a9a5d2
first possible fix (checking tests)
PietropaoloFrisoni Apr 2, 2024
1b515bc
changing import order
PietropaoloFrisoni Apr 2, 2024
d74f5eb
removing torch import
PietropaoloFrisoni Apr 2, 2024
2c1dcdb
moving change to `sampling.py`
PietropaoloFrisoni Apr 2, 2024
a903435
moving change to `sampling.py`
PietropaoloFrisoni Apr 2, 2024
4cb8b23
cleaning up
PietropaoloFrisoni Apr 2, 2024
4f3c925
updating changelog
PietropaoloFrisoni Apr 2, 2024
27c537e
Merge branch 'master' into bugfix/probs_do_not_sum_to_one_torch
PietropaoloFrisoni Apr 3, 2024
c2dc97d
removing unnecessary brackets
PietropaoloFrisoni Apr 3, 2024
a431763
Merge branch 'master' into bugfix/probs_do_not_sum_to_one_torch
PietropaoloFrisoni Apr 4, 2024
4e1bc12
renormalizing non-batched states as well for safety
PietropaoloFrisoni Apr 4, 2024
d11f2ba
Merge branch 'bugfix/probs_do_not_sum_to_one_torch' of https://github…
PietropaoloFrisoni Apr 4, 2024
d555bae
Merge branch 'master' into bugfix/probs_do_not_sum_to_one_torch
PietropaoloFrisoni Apr 8, 2024
a1e8453
adding unit test
PietropaoloFrisoni Apr 8, 2024
50a156a
cycling just once over `abs_diff`
PietropaoloFrisoni Apr 8, 2024
8e19148
Triggering CI
PietropaoloFrisoni Apr 8, 2024
a8f6cd2
Merge branch 'master' into bugfix/probs_do_not_sum_to_one_torch
PietropaoloFrisoni Apr 8, 2024
9767129
Merge branch 'master' into bugfix/probs_do_not_sum_to_one_torch
PietropaoloFrisoni Apr 8, 2024
424c93f
breaking the error case and the normal case up into two tests.
PietropaoloFrisoni Apr 8, 2024
9d4afe4
Merge branch 'master' into bugfix/probs_do_not_sum_to_one_torch
PietropaoloFrisoni Apr 8, 2024
a8116bc
Triggering CI
PietropaoloFrisoni Apr 8, 2024
3 changes: 3 additions & 0 deletions doc/releases/changelog-dev.md
@@ -297,6 +297,9 @@

<h3>Bug fixes 🐛</h3>

* The probabilities now sum to one using the `torch` interface with `default_dtype` set to `torch.float32`.
[(#5462)](https://github.com/PennyLaneAI/pennylane/pull/5462)

* Avoid bounded value failures due to numerical noise with calls to `np.random.binomial`.
[(#5447)](https://github.com/PennyLaneAI/pennylane/pull/5447)

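As context for this entry, the failure mode (reported in issue #5444, referenced in the diff below) can be reproduced with a sketch along these lines. This is illustrative only, not code from the PR: the uniform state and the number of basis states are made up, and whether `rng.choice` actually raises depends on how far the float32 sum happens to drift.

```python
import numpy as np
import torch

torch.set_default_dtype(torch.float32)

# Hypothetical uniform state over 2**13 basis states: squaring and
# summing the amplitudes in float32 lets the total probability drift
# slightly away from 1.0.
n = 2**13
state = torch.full((n,), 1.0 / np.sqrt(n))
probs = (torch.abs(state) ** 2).numpy()
print(probs.sum())  # may print e.g. 0.99999994 rather than 1.0

rng = np.random.default_rng(0)
try:
    rng.choice(n, size=10, p=probs)
except ValueError as exc:
    # raised once the drift exceeds NumPy's internal tolerance:
    # "probabilities do not sum to 1"
    print(exc)
```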
15 changes: 15 additions & 0 deletions pennylane/devices/qubit/sampling.py
@@ -448,10 +448,25 @@ def sample_state(
    with qml.queuing.QueuingManager.stop_recording():
        probs = qml.probs(wires=wires_to_sample).process_state(flat_state, state_wires)

    # when using the torch interface with float32 as default dtype,
    # probabilities must be renormalized as they may not sum to one
    # see https://github.com/PennyLaneAI/pennylane/issues/5444
    norm = qml.math.sum(probs, axis=-1)
    abs_diff = np.abs(norm - 1.0)
    cutoff = 1e-07

    if is_state_batched:
        if np.any(abs_diff > 0) and np.all(abs_diff < cutoff):
            probs = probs / norm[:, np.newaxis] if norm.shape else probs / norm

        # rng.choice doesn't support broadcasting
        samples = np.stack([rng.choice(basis_states, shots, p=p) for p in probs])
    else:
        if 0 < abs_diff < cutoff:
            probs /= norm

        samples = rng.choice(basis_states, shots, p=probs)

    powers_of_two = 1 << np.arange(num_wires, dtype=np.int64)[::-1]
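Read in isolation, the guard added above amounts to the following standalone sketch. The name `renormalize` and the NumPy-only types are illustrative, not PennyLane API (the patch itself goes through `qml.math` and also handles a shapeless `norm`). Only drifts strictly between zero and the cutoff are corrected, so a genuinely broken probability vector still fails loudly in `rng.choice`.

```python
import numpy as np

CUTOFF = 1e-07  # tolerated float32-scale drift of the total probability


def renormalize(probs: np.ndarray, is_state_batched: bool) -> np.ndarray:
    """Divide out a tiny normalization drift; leave larger errors intact."""
    norm = np.sum(probs, axis=-1)
    abs_diff = np.abs(norm - 1.0)
    if is_state_batched:
        # renormalize the batch only if every entry's drift is tiny
        if np.any(abs_diff > 0) and np.all(abs_diff < CUTOFF):
            probs = probs / norm[:, np.newaxis]
    elif 0 < abs_diff < CUTOFF:
        probs = probs / norm
    return probs


# Inject a drift of 5e-08 (below the cutoff) into an exact distribution:
p = np.full(1024, 1.0 / 1024)
p[0] += 5e-08
print(p.sum(), renormalize(p, is_state_batched=False).sum())  # ~1.00000005 -> ~1.0
```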