expval ignores wire labels and uses position implicitly for tensor product ops (#2276)
Conversation
Hello. You may have forgotten to update the changelog!
The reason some of the tests are failing is because of the following edge case:
The answer would be "correct" if we interpreted the matrix as already being transformed under the provided wire order.
Looks good 🥇 Amazing solution, goes a long way with understanding what's happening here.
Codefactor still seems to cause some issues and left some comments, but once those are addressed, this is good to be merged from my side.
Context:
Consider the following circuit:
But the expected behaviour should be:
It seems as though the wire labels are being ignored and instead the position of the operations in the tensor product are being used to determine how to compute the expectation value.
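The original circuit and its outputs were not preserved on this page, but the core of the problem can be sketched with a minimal NumPy example (an illustration of the underlying linear algebra, not PennyLane's implementation): for a state with only wire 0 flipped, `PauliZ @ Identity` and `Identity @ PauliZ` have different expectation values, so the position of each factor in the tensor product cannot be treated as interchangeable with its wire label.

```python
import numpy as np

Z = np.array([1.0, -1.0])  # eigenvalues of PauliZ
I = np.array([1.0, 1.0])   # eigenvalues of Identity

# Probabilities of |00>, |01>, |10>, |11> for the state |10>
# (wire 0 flipped, wire 1 untouched)
probs = np.array([0.0, 0.0, 1.0, 0.0])

# <O> = sum_k eigvals(O)[k] * probs[k], where the eigenvalue order
# must match the basis-state order of the probability vector
expval_zi = np.kron(Z, I) @ probs  # Z acts on wire 0 -> -1.0
expval_iz = np.kron(I, Z) @ probs  # Z acts on wire 1 -> +1.0
```

If wire labels are ignored and only factor positions are used, these two observables collapse to the same computation and one of the two results comes out wrong.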
Additionally:
This PR also addresses the following bug which was discovered during the testing of this feature:
Which produces the following result:
In the above result, notice that the expectation values computed for observables 4 and 5 do not match the others.
Description of the Change:
The `expval(self, observable, shot_range=None, bin_size=None)` function defined in `_qubit_device.py` uses the eigenvalues of the provided tensor-product observable and the probability vector of the state to compute the expectation value when `shots=None`.

The first bug arises from the lexicographic sorting of the eigenvalues, based on the wires the observables act on, in `qml.Tensor.get_eigvals()`. This sorting causes the two observables `PauliZ @ Identity` and `Identity @ PauliZ` to end up with identical eigenvalues. It is addressed by removing the sorting in `get_eigvals()`.

The second bug arises from the way the probability vector is permuted before the expectation value is computed. The wire order of the observable is passed into `device.probability()`, which computes the marginal probability distribution over the observable's wires and then permutes the distribution according to that wire order. This permutation is incorrect: the inverse of the permutation should be used instead. The reason the bug was not caught sooner is that tensor-product observables usually contain at most two terms and/or have only one qubit out of position. In these common cases the wire-order permutation is its own inverse, so the correct expectation value is produced anyway; with more than one qubit out of position, it breaks. This is addressed by computing the permutation beforehand and permuting the wires before they are passed into the probability function.
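A hedged NumPy sketch of why the inverse permutation matters (a toy three-wire marginal distribution, not the actual device code): a single swap of two wires is its own inverse, so applying the forward permutation instead of the inverse happens to give the right answer, but a three-cycle is not, and the two choices reorder the distribution differently.

```python
import numpy as np

probs = np.arange(8) / 28.0   # toy 3-qubit probability vector (sums to 1)
p = probs.reshape([2, 2, 2])  # one axis per wire

# A wire-order permutation that is a 3-cycle: NOT its own inverse
perm = [1, 2, 0]
inv = list(np.argsort(perm))  # inverse permutation: [2, 0, 1]

# Permuting the axes forward vs. by the inverse gives different vectors
assert not np.array_equal(p.transpose(perm), p.transpose(inv))

# A single swap (a transposition) IS its own inverse, which is why the
# bug went unnoticed for the common one- and two-factor observables
swap = [1, 0, 2]
assert np.array_equal(p.transpose(swap), p.transpose(np.argsort(swap)))
```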
Related GitHub Issues:
This PR closes the issue described above.