[new opmath 1] Toggle __use_new_opmath #5269

Merged: 67 commits merged into master from enable_new_opmath on Mar 25, 2024
Conversation

@Qottmann (Contributor) commented Feb 27, 2024

PR to toggle `__use_new_opmath` and make the new operator arithmetic the default.
This will require several changes in the codebase (and further updates to demos, datasets, and plugins).
Bigger changes should be offloaded to separate PRs; small updates can be gathered here.
Branching: #5269 > #5216 > #5322 > #5335
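
For reference, a minimal sketch of flipping the toggle at runtime (assuming the `enable_new_opmath`/`disable_new_opmath`/`active_new_opmath` helpers in `qml.operation`; exact entry points and return types may differ between releases):

```python
import pennylane as qml

# With this PR the new operator arithmetic is the default.
print(qml.operation.active_new_opmath())  # expected: True

# Fall back to the legacy behaviour, where composing observables builds a Tensor.
qml.operation.disable_new_opmath()
print(type(qml.PauliX(0) @ qml.PauliZ(1)).__name__)  # e.g. Tensor

# Re-enable the new default, where the same expression builds a Prod.
qml.operation.enable_new_opmath()
print(type(qml.PauliX(0) @ qml.PauliZ(1)).__name__)  # e.g. Prod
```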

  • Make sure lightning works; the script below runs a small VQE benchmark on lightning.qubit as a check:

```python
import os, sys
os.environ["OMP_NUM_THREADS"]="8"
import pennylane as qml
from pennylane import numpy as np
from timeit import default_timer as timer

if __name__ == "__main__":
    rng = np.random.default_rng(seed=1337)  # make the results reproducible
    mol = qml.data.load("qchem", molname="H2O", bondlength=0.958, basis="STO-3G")[0]

    hf_state = mol.hf_state
    ham = mol.hamiltonian
    wires = ham.wires
    dev = qml.device("lightning.qubit", wires=wires, batch_obs=True)

    n_electrons = mol.molecule.n_electrons
    singles, doubles = qml.qchem.excitations(n_electrons, len(wires))
    hf_state.requires_grad = False
    hf_ops = []

    # record which wires need a PauliX to prepare the Hartree-Fock reference state
    for idx, i in enumerate(hf_state):
        if i == 1:
            hf_ops.append((qml.PauliX, idx))

    @qml.qnode(dev, diff_method="adjoint")
    def cost(weights):
        for (x,i) in hf_ops: # HF state
            x(i)
        for idx,s in enumerate(singles):
            qml.SingleExcitation(weights[idx], wires=s)
        for idx,d in enumerate(doubles):
            qml.DoubleExcitation(weights[idx+len(singles)], wires=d)
        return qml.expval(ham)

    params = qml.numpy.array(rng.normal(0, np.pi, len(singles) + len(doubles)))
    opt = qml.GradientDescentOptimizer(stepsize=0.5)

    # store the values of the circuit parameter
    angle = [params]
    max_iterations = 50
    procs = int(os.getenv("PL_FWD_BATCH", "0"))
    pre_s = f"qubits={len(wires)},num_terms={len(ham.terms()[0])},procs={procs},"

    energies = []

    for n in range(max_iterations):
        start_grad = timer()
        params, prev_energy = opt.step_and_cost(cost, params)
        energies.append(prev_energy)
        end_grad = timer()
        angle.append(params)
        print(f"{pre_s},Step={n},Time_grad={end_grad-start_grad}")

    start_fwd = timer()
    energy = cost(params)
    energies.append(energy)
    end_fwd = timer()
    print(f"Energies={energies},Time_fwd={end_fwd - start_fwd}")

A contributor commented:

Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@albi3ro (Contributor) left a comment:

🎉

@Qottmann merged commit cb29b7a into master on Mar 25, 2024
40 checks passed
@Qottmann deleted the enable_new_opmath branch on March 25, 2024 at 23:40

mlxd added a commit that referenced this pull request on Mar 27, 2024; its description is reproduced below:

### Before submitting

Please complete the following checklist when submitting a PR:

- [x] All new features must include a unit test. If you've fixed a bug or added code that should be tested, add a test to the test directory!

- [x] All new functions and code must be clearly commented and documented. If you do make documentation changes, make sure that the docs build and render correctly by running `make docs`.

- [x] Ensure that the test suite passes, by running `make test`.

- [x] Add a new entry to the `doc/releases/changelog-dev.md` file, summarizing the change, and including a link back to the PR.

- [x] The PennyLane source code conforms to [PEP8 standards](https://www.python.org/dev/peps/pep-0008/). We check all of our code against [Pylint](https://www.pylint.org/). To lint modified files, simply `pip install pylint`, and then run `pylint pennylane/path/to/file.py`.

When all the above are checked, delete everything above the dashed line and fill in the pull request template.


------------------------------------------------------------------------------------------------------------

**Context:** When Torch has a GPU-backed data buffer, failures can occur when attempting to make autoray-dispatched calls to Torch methods with paired CPU data. In this case, with probabilities on the GPU and eigenvalues on the host (read from the observables), failures appeared with `qml.dot`, and can be reproduced with:

```python
import pennylane as qml
import torch
import numpy as np

torch_device="cuda"
dev = qml.device("default.qubit.torch", wires=2, torch_device=torch_device)
ham = qml.Hamiltonian(torch.tensor([0.1, 0.2], requires_grad=True), [qml.PauliX(0), qml.PauliZ(1)])

@qml.qnode(dev, diff_method="backprop", interface="torch")
def circuit():
    qml.RX(np.zeros(5), 0)  # Broadcast the state by applying a broadcasted identity
    return qml.expval(ham)

res = circuit()
assert qml.math.allclose(res, 0.2)
```

This PR modifies the registered `coerce` method for Torch to automigrate mixed CPU-GPU data, always favouring the associated GPU device. In addition, the method now catches multi-GPU data, where tensors do not reside on the same device index, and fails outright. As a longer-term solution, moving the Torch GPU dispatch calls earlier in the stack would be more sound, but this fixes the aforementioned issue, at the expense of always migrating from CPU to GPU.
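
For illustration, a minimal standalone sketch of the automigration idea described above (this is not the actual PennyLane `coerce` implementation; the helper name and structure here are hypothetical):

```python
import torch

def coerce_to_common_device(tensors):
    """Hypothetical helper: move mixed CPU/GPU torch tensors onto one GPU.

    CPU tensors are migrated to the GPU used by the GPU-backed tensors,
    while tensors spread across different GPU indices raise an error
    instead of being silently copied between devices.
    """
    gpu_devices = {t.device for t in tensors if t.device.type == "cuda"}

    if len(gpu_devices) > 1:
        # Multi-GPU data on mismatched indices: fail outright.
        raise RuntimeError(f"Tensors reside on multiple GPUs: {gpu_devices}")

    if not gpu_devices:
        return list(tensors)  # all-CPU data needs no migration

    (target,) = gpu_devices
    return [t.to(target) for t in tensors]

if torch.cuda.is_available():
    probs = torch.rand(4, device="cuda")             # GPU-backed probabilities
    eigvals = torch.tensor([1.0, -1.0, 1.0, -1.0])   # host-side eigenvalues
    probs, eigvals = coerce_to_common_device([probs, eigvals])
    print(torch.dot(probs, eigvals))                 # both now on the same GPU
```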

**Description of the Change:** As above.

**Benefits:** Allows automatic data migration from host to device when using a GPU-backed tensor. In addition, it will catch multi-GPU tensor data when using Torch and fail due to non-local representations.

**Possible Drawbacks:** Auto migration may not always be wanted. The alternative solution is to always be explicit about locality and move the eigenvalue data to the device at a higher layer in the stack.

**Related GitHub Issues:** #5269 introduced changes that resulted in GPU
errors.