Batch run #2069
Conversation
This function creates multiple circuit executions for batched input examples and executes all batch inputs with the same trainable variables. The main difference between the version proposed in the issue and this commit is the `argnum` input: it indicates the location of the batched input and therefore allows the transform to work across platforms.
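The idea described above can be sketched in plain Python. This is a conceptual sketch, not the actual PennyLane implementation: `batch_execute`, `toy_circuit`, and their signatures are hypothetical names used only to illustrate how an `argnum`-style index lets the same trainable weights be reused across every example in the batch.

```python
# Conceptual sketch (not the PennyLane implementation): run the circuit once
# per example in the batched input, reusing the same trainable weights, and
# collect the per-example results. `argnum` marks which positional argument
# of the circuit holds the batch.
def batch_execute(circuit, args, argnum=0):
    batched = args[argnum]
    results = []
    for example in batched:
        call_args = list(args)
        call_args[argnum] = example  # substitute a single example for the batch
        results.append(circuit(*call_args))
    return results

# Toy stand-in for a QNode: a plain function of (input, weights).
def toy_circuit(x, weights):
    return sum(w * xi for w, xi in zip(weights, x))

batch = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three input examples
weights = [0.5, -0.5]                          # shared trainable variables
out = batch_execute(toy_circuit, (batch, weights), argnum=0)
```

Because only `argnum` identifies the batched argument, the same mechanism works regardless of which interface (NumPy, TF, Torch) supplies the tensors.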
Hi @jackaraz, thanks for making this PR! I just wanted to check in to see if you had any questions. If not, would this be ready for a code review?
Hi @josh146, not at the moment. I guess my code does not meet all of your code-formatting requirements; I can reformat it with black if you like. Also, after seeing @antalszava's proposal here, I was thinking it might be possible to update the TensorFlow layer a bit further. However, this would require creating it as a Model rather than a Layer, which might create usability issues for a regular user.
Codecov Report

@@           Coverage Diff            @@
##    v0.22.0-rc0    #2069    +/-   ##
======================================
  Coverage         ?    99.32%
======================================
  Files            ?       242
  Lines            ?     19138
  Branches         ?         0
======================================
  Hits             ?     19008
  Misses           ?       130
  Partials         ?         0

Continue to review the full report at Codecov.
Yes, that would be great 🙂
Feel free to discuss any thoughts you have regarding this in more detail, either here or in the issue! As you are currently working on a model that requires this feature, your feedback is invaluable.
Hi @josh146
This is done. But I still see some linting errors; I believe it doesn't like how I present
I have a few more TF-based implementations using PennyLane, such as the quantum natural gradient (as far as I know this does not exist with the TF backend at the moment; my implementation is based on PennyLane's original one) and parallelization on GPU/CPU. However, my implementation is not very generic: I'm basically writing a Keras model with custom training inside it, and it requires different models for different cases, i.e. one model for purely quantum circuit-based networks, one for hybrid networks where the classical portion trains with traditional SGD and the quantum portion trains with QNGD within the same training sequence, etc. As I said, these are case-dependent implementations. We will release the paper soon and I can show you the implementations, but I'm not sure whether they would be relevant for PennyLane.
Thanks @jackaraz!
This would actually be very interesting to see, as we have long been wanting to extend the QNG optimizer to support other interfaces. However, we still have various design questions, so your implementation --- even if not eventually merged into the codebase --- could still be helpful in resolving them!
Hello everyone. I was preparing to open a new issue regarding the batching of input data. However, it seems that @jackaraz and @josh146 have already discussed this in the forum before starting this PR, and I would like to make sure we have the same goal in mind. My understanding is that the goal of this PR is to implement the above functionality. I am wondering whether your implementation, @jackaraz, is
Hi @vbelis, exactly: the goal is to batch as in the ML context. The function that I implemented should be generic; I tested it with the PennyLane NumPy interface and with TF, but not with PyTorch, since I don't use PyTorch. However, since it uses the PennyLane backend, the separation between trainable and non-trainable parameters should already be available to the
Hi @jackaraz, thanks for the fast reply. OK, I will pull and do some tests locally to check the behavior, since I am using
Hi @vbelis, I wasn't planning to add anything else. I'm not really sure why the tests are failing; any feedback to help finalize this PR would be much appreciated.
I will try to take a look soon. However, it would be much more efficient if some of the main developers, e.g. @josh146, could provide some tips regarding the failed tests. Regarding the failed checks by
Regarding the failed CodeFactor check.
Hi @vbelis, welcome! Regarding the CodeFactor warning, I think it is fine to disable this one. That can be done by adding a comment `# pylint: disable=too-many-arguments` directly after the function signature 🙂
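As a minimal sketch of the suggestion above (the function name and arguments here are hypothetical, not from the PR): an inline `# pylint: disable=too-many-arguments` placed just after the signature silences that one pylint message for this function only, rather than project-wide.

```python
# Hypothetical function with many arguments, used only to show where the
# inline pylint disable comment goes: directly after the function signature.
def batch_transform(circuit, batch, weights, argnum, interface, diff_method):
    # pylint: disable=too-many-arguments
    """Run the circuit once per batch example with shared weights."""
    return [circuit(b, weights) for b in batch]

results = batch_transform(lambda x, w: x * w, [1, 2, 3], 2, 0, "tf", "backprop")
```

Scoping the disable to the function keeps the linter active for the rest of the module, which is why it is preferable to a global config change here.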
This is a CI check that verifies that all lines of code added in this PR are tested :) In this case, it appears that this particular line is not being called by any of the unit tests.
This is much appreciated @vbelis! Let me know if you have any questions :)
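The commit log mentions changing an `AssertionError` to a `ValueError` and adding a test for it. A hedged sketch of how such an uncovered raising line is exercised so the Codecov check passes (the `check_batch_idx` function and its message are hypothetical, not the PR's actual code): the `raise` only counts as covered once a test triggers it.

```python
import pytest

# Hypothetical validation helper: raising a ValueError (rather than using an
# assert) for an out-of-range batch index, as suggested in the review.
def check_batch_idx(batch_idx, num_args):
    if batch_idx >= num_args:
        raise ValueError(
            f"batch_idx {batch_idx} is out of range for {num_args} arguments"
        )
    return batch_idx

def test_check_batch_idx_raises():
    # pytest.raises marks the raising line as executed, so coverage reports it.
    with pytest.raises(ValueError, match="out of range"):
        check_batch_idx(3, num_args=2)
```

Running this test covers the error branch, which is exactly what the "all lines added in this PR are tested" CI check looks for.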
Co-authored-by: Josh Izaac <josh146@gmail.com>
All requests have been implemented.
Thanks @jackaraz for taking into account all suggestions! This is a really nice addition, will be great to have this feature in 🎉
I've left some minor comments, mostly regarding the documentation, but I'm happy to now approve this PR 🙂
Thanks @josh146; all suggestions have been implemented.
Commit history (repeated merge-commit messages deduplicated into a single list):

* Fix expectation value computation for tensor-product ops (#2276): sorted wires to match eigenvalues when computing the expectation value; added a permutation on the probability vector, a new device method `get_ordered_subset()` (later renamed, with the permuting logic wrapped into a private `_permute_wires` method), tests in the device test suite, and a changelog entry.
* [Bug] Exclude Snapshot from adjoint backwards pass (#2289): also adds snapshot tests for diff_methods.
* Work on consistency of `Operator`s (#2287).
* Batch run (#2069): batching ability for non-trainable inputs only, following issue #2037. Creates multiple circuit executions for batched input examples and executes all batch inputs with the same trainable variables; the `argnum` input indicates the location of the given input, which allows the transform to work across platforms. Follow-up commits add Keras tests, update docstrings, change the assertion error to a `ValueError`, and rename `argnum` to `batch_idx`.
* Circuit cutting: tidy up documentation (#2279) and update changelog (#2290).
* Minor gradient fixes (#2299): in the absence of trainable params, some gradient transforms did not produce an empty tuple like the rest of our functions; also minor formatting changes in `param_shift_hessian`, a fix for all-zero diff_methods, and a missing `requires_grad`.
* Deprecate jacobian tape (#2306): deprecates the Jacobian tape and tape subclasses.
* `qml.generator` doc fixes (#2309) and `qml.generator(op)` backwards compatibility (#2305).
* Snapshot: remove temporary fixes for lightning device (#2291); support for controlled & adjoint in Snapshot/Barrier (#2315).
* Docs fixes for v0.22.0 release (#2312); extend the conditional operations documentation (#2294); few docstring updates in prep for v0.22.0 (#2311); amend docstring examples for `compute_matrix` and `compute_eigvals` (#2314).
* ControlledQubitUnitary should raise `DecompositionUndefinedError` (#2320).
* Pin pennylane-lightning version in CI (#2318); pin Lightning `>=0.22` (#2324).
* `v0.22.0` release notes (#2303); bump the version to 0.22.1 and v0.22.1 release notes; fix queuing unexpected operators with `qml.measure`.

Co-authored-by: antalszava, Josh Izaac, anthayes92, Nathan Killoran, Jay Soni, David Ittah, David Wierichs, Jack Y. Araz, Tom Bromley, Christina Lee, Maria Schuld, Guillermo Alonso-Linaje.
Araz <jackaraz@gmail.com> Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com> Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com> Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com> Co-authored-by: Christina Lee <christina@xanadu.ai> Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
* Fix output shape of batch transforms: remove the squeezing from the current batch-transform outputs (`gradient_transform`, `hessian_transform`) and instead produce the same output shape that `qml.QNode` generates, directly at the `batch_transform` level
* Do contract with cjac=[1] to remove unit dimension
* Fix var_param_shift iterating over a 0d array, and its extra dimension for scalar-valued QNodes: the mask is always 2d, so its shape is adjusted to match the arrays holding the result values
* Fix mitigate processing_fn; fix tests expecting unit dimensions; fix dimensionality in param_shift_cv; fix metric tensor tape processing
* [Bug] Exclude Snapshot from adjoint backwards pass (#2289)
* Work on consistency of `Operator`s (#2287)
* Batch run (#2069): batching ability for non-trainable inputs only, following issue #2037
* Fix batch execution documented return type; remove the obsolete `safe_squeeze` and its tests
* Circuit cutting: Tidy up documentation (#2279)
* Circuit cutting: update changelog (#2290)
* Minor gradient fixes (#2299)
* Deprecate jacobian tape (#2306)
* `qml.generator` doc fixes (#2309)
* Move tape result squeezing to each transform: the previous placement inside the batch transform assumed all its uses resemble the gradient transforms, which is not the case; the squeezing now lives inside each gradient transform, and tapes constructed from QNodes carry the `_qfunc_output` attribute
* Fix linting errors
* Snapshot: remove temporary fixes for lightning device (#2291)
* Don't stack the tape result list, only squeeze each element; fix `_qfunc_output` missing from expanded tapes
* Revert "Fix metric tensor tape processing" (reverts commit 03479ac)
* Fix issue when stacking scalars: `np.stack` cannot deal with scalar arrays of type object, which are now skipped
* Docs fixes for v0.22.0 release (#2312)
* Extend the conditional operations documentation (#2294)
* Add `qml.generator(op)` backwards compatibility (#2305)
* pin pennylane-lightning version in CI (#2318)
* Amend docstring examples for `compute_matrix` and `compute_eigvals` (#2314)
* Few docstring updates in prep for `v0.22.0` (#2311): rename the batch_input test file; explicit `requires_grad` upon parameter generation; use tf in the example
* Support for controlled & adjoint in Snapshot/Barrier (#2315)
* ControlledQubitUnitary should raise `DecompositionUndefinedError` (#2320)
* `v0.22.0` release notes (#2303)
* Pin Lightning `>=0.22` (#2324)
* Review: comment fixes and simplified code; add tests demonstrating bug resolution; changelog; undo a difficult-to-test change for the CV device

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: David Wierichs <davidwierichs@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Jack Y. Araz <jackaraz@gmail.com>
Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Co-authored-by: Jay Soni <jbsoni@uwaterloo.ca>
Co-authored-by: Christina Lee <christina@xanadu.ai>
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
Context:
In classical ML applications, training examples are processed in batches, and the mean or sum of an objective function over the batch is used to apply gradient descent. This transform allows the collective execution of a tape with batched inputs, with the same trainable weights shared across the entire batch.
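As a minimal sketch of the idea (hypothetical names; a plain NumPy function stands in for a quantum circuit), the transform splits the leading batch dimension of the inputs into individual executions that all share the same trainable weights, then stacks the results back together:

```python
import numpy as np

def circuit(x, weights):
    # Stand-in for a single (non-batched) circuit execution:
    # one input example `x` evaluated with the shared trainable `weights`.
    return np.cos(weights @ x)

def batch_execute(batched_x, weights):
    # Create one "execution" per example in the batch, all using the
    # same trainable weights, then stack the scalar results.
    return np.stack([circuit(x, weights) for x in batched_x])

batch = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # batch of 3 examples
weights = np.array([0.5, -0.5])                          # shared weights
out = batch_execute(batch, weights)
print(out.shape)  # (3,)
```

The per-example loop mirrors what the transform does at the tape level: one tape per example, submitted together rather than one at a time.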
Description of the Change:
Adds a new transform, `batch_input` (in `pennylane/transforms/batch_input.py`), which expands a tape with batched non-trainable inputs into one circuit execution per example, all sharing the same trainable parameters.
Benefits:
A batch of training or validation examples can be executed at the same time. This also allows collective job submissions to IBM Q, instead of submitting one circuit tape at a time.
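To connect this to the training loop described in the context above, here is a hedged NumPy-only sketch (hypothetical helper names, finite differences standing in for the parameter-shift rule) of a gradient-descent step on the mean objective of a batch, evaluated with one shared set of weights:

```python
import numpy as np

def circuit(x, weights):
    # Stand-in expectation value for one example.
    return np.cos(weights @ x)

def batch_loss(batch, targets, weights):
    # Mean squared error over the whole batch, evaluated with a single
    # set of shared weights -- the scalar that gradient descent uses.
    preds = np.array([circuit(x, weights) for x in batch])
    return np.mean((preds - targets) ** 2)

def grad_step(batch, targets, weights, lr=0.1, eps=1e-6):
    # Central finite differences on the batch-mean loss.
    grad = np.zeros_like(weights)
    for i in range(len(weights)):
        shift = np.zeros_like(weights)
        shift[i] = eps
        grad[i] = (batch_loss(batch, targets, weights + shift)
                   - batch_loss(batch, targets, weights - shift)) / (2 * eps)
    return weights - lr * grad

batch = np.array([[0.1, 0.2], [0.3, 0.4]])
targets = np.array([0.9, 0.8])
w = np.array([0.5, -0.5])
w_new = grad_step(batch, targets, w)  # one descent step on the batch loss
```

Every loss evaluation touches the whole batch at once, which is exactly the access pattern that benefits from submitting all per-example circuits as one job.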
Possible Drawbacks:
The code assumes that the arguments are ordered with the batched inputs first, followed by the non-batched (trainable) arguments. If the user passes arguments in any other order, the transform silently produces wrong results rather than raising an error.
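The ordering assumption can be illustrated with a hypothetical splitting helper (`split_batched_args` is not part of the PR; it is a NumPy sketch of the slicing logic). The batch size is inferred from the leading dimension of the argument at the batch index, so pointing that index at a non-batched argument mis-slices without any error:

```python
import numpy as np

def split_batched_args(args, batch_indices):
    # Hypothetical helper: arguments at `batch_indices` carry a leading
    # batch dimension; every other argument is shared by all examples.
    batch_size = len(args[batch_indices[0]])
    unbatched = [a for i, a in enumerate(args) if i not in batch_indices]
    per_example = []
    for b in range(batch_size):
        example = [args[i][b] for i in batch_indices]
        per_example.append(example + unbatched)
    return per_example

inputs = np.array([[0.1], [0.2], [0.3]])   # batched: leading dim of 3
weights = np.array([0.5, -0.5])            # non-batched, shared

# Correct ordering: the batched input comes first (index 0 is batched).
calls = split_batched_args((inputs, weights), batch_indices=[0])
print(len(calls))  # 3 circuit executions, each sharing `weights`

# Wrong ordering: the batch index now lands on the non-batched weights,
# silently yielding 2 "examples" (the length of `weights`) instead of 3.
wrong = split_batched_args((weights, inputs), batch_indices=[0])
print(len(wrong))  # 2 -- wrong results, no error raised
```

This is why the drawback above matters: nothing in the slicing itself can detect that the wrong argument was treated as batched.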
Related GitHub Issues:
Addresses the improvement request in issue #2037.