
[BUG] Probabilities do not sum to one with Torch #5444

Closed
trbromley opened this issue Mar 27, 2024 · 1 comment
Labels
bug 🐛 Something isn't working

Comments

trbromley (Contributor)

Expected behavior

The following code executes regardless of whether the Torch default dtype is float64 or float32.

Actual behavior

A ValueError is raised when the default dtype is torch.float32.

Additional information

No response

Source code

import pennylane as qml
import torch

# With float32 as the default dtype, the computed probabilities can drift
# slightly from summing to one; float64 does not trigger the error.
torch.set_default_dtype(torch.float32)

dev = qml.device("default.qubit", shots=1000)

@qml.qnode(dev, interface="torch")
def f(x):
    qml.RX(x, 0)  # broadcast over the two parameter values in x
    return qml.expval(qml.PauliZ(0))

x = torch.tensor([0.4, 0.6])
f(x)  # raises ValueError: probabilities do not sum to 1

Tracebacks

ValueError: probabilities do not sum to 1

System information

Development branch of PennyLane and torch v2.2.1.

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
trbromley added the bug 🐛 Something isn't working label on Mar 27, 2024
albi3ro (Contributor) commented Mar 27, 2024

Adding the lines

    norm = qml.math.sum(probs, axis=-1)
    probs = probs/norm

before performing rng.choice seems to solve the problem, but I guess the question is whether we're happy with this solution. Maybe we could add a check to see if the norm is "almost 1" and only re-normalize if it's sufficiently close.
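
A minimal sketch of that suggestion, assuming NumPy-style probabilities right before sampling (the helper name and example values are illustrative, not PennyLane's actual internals):

import numpy as np

def renormalize_if_close(probs, atol=1e-7):
    # Hypothetical helper: re-normalize only when the total probability is
    # already "almost 1" within atol, so genuinely bad inputs still fail.
    norm = np.sum(probs, axis=-1, keepdims=True)
    if np.allclose(norm, 1.0, atol=atol):
        probs = probs / norm
    return probs

rng = np.random.default_rng(seed=0)
probs = renormalize_if_close(np.array([0.50000003, 0.49999997], dtype=np.float32))
samples = rng.choice(2, size=1000, p=probs)  # no longer raises ValueError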

albi3ro pushed a commit that referenced this issue Apr 8, 2024
**Context:** When computing the `expval` of an operator using a quantum
device with the `torch` interface and `default_dtype` set to
`torch.float32`, the probabilities do not sum to one. This error does
not occur if `default_dtype` is set to `torch.float64`.

**Description of the Change:** A renormalization of probabilities is
introduced to overcome the issue. Renormalization occurs whenever the
following two conditions are satisfied: 1) at least one probability
distribution does not sum precisely to one, and 2) every distribution's
deviation from one is between `0` and `1e-07`.

**Benefits:** The error is no longer raised.

**Possible Drawbacks:** The main drawback is that renormalization can
occur in cases where it should not, although this is unlikely: the
cutoff `1e-07` is small enough to prevent such cases, yet large enough
that the error is no longer raised.

**Related GitHub Issues:** #5444

[sc-59957]
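
As a rough illustration of the condition described in the commit message (a sketch, not the actual PennyLane source), the gate on renormalization might look like:

import numpy as np

CUTOFF = 1e-7  # tolerance quoted in the commit message

def maybe_renormalize(probs):
    # Illustrative only: renormalize when at least one distribution does not
    # sum exactly to one, and every deviation from one is within the cutoff.
    norm = np.sum(probs, axis=-1, keepdims=True)
    deviation = np.abs(norm - 1.0)
    if np.any(deviation > 0) and np.all(deviation <= CUTOFF):
        probs = probs / norm
    return probs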