
Mark the kernel test not converging as xfail #1918

Merged: antalszava merged 7 commits from kernel_solver_xfail into master on Nov 19, 2021

Conversation

antalszava (Contributor)

Certain runs of a particular kernels-module test fail sporadically because the solver does not converge. The issue appears to be related to specific versions of cvxpy and cvxopt (a similar issue previously came up when one of them was not installed).
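
A minimal sketch of how such a flaky solver test can be marked as an expected failure, assuming pytest and cvxpy; the problem, test name, and reason string below are illustrative stand-ins, not the actual PennyLane test:

```python
import numpy as np
import pytest

# Skip the test outright if cvxpy is missing (the PR adds a fixture for this);
# importorskip is one common way to express that.
cp = pytest.importorskip("cvxpy")


@pytest.mark.xfail(
    raises=cp.error.SolverError,
    reason="certain cvxpy/cvxopt versions sporadically fail to converge",
)
def test_solver_dependent_result():
    # Hypothetical convex problem standing in for closest_psd_matrix.
    x = cp.Variable(2)
    target = np.array([1.0, 2.0])
    cp.Problem(cp.Minimize(cp.sum_squares(x - target))).solve()
    assert np.allclose(x.value, target)
```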

@github-actions (Contributor)

Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.
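
For illustration, such an entry for this PR might look roughly like the following sketch (not the exact wording that was added):

```
* The kernels-module test that relies on `cvxpy`/`cvxopt` convergence is now
  marked as `xfail` when the solver does not converge, and is skipped when
  `cvxpy` is not installed.
  [(#1918)](https://github.com/PennyLaneAI/pennylane/pull/1918)
```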

dime10 (Contributor) left a comment:

Thanks @antalszava, hopefully that prevents the CI from sporadically failing in the future.

import cvxpy as cp

try:
    # closest_psd_matrix calls into a cvxpy solver, which may fail to converge
    output = kern.closest_psd_matrix(input, fix_diagonal=fix_diagonal, feastol=1e-10)
except cp.error.SolverError:
Contributor:

I think this line is a problem if cvxpy fails to import, no?

antalszava (Contributor, Author):

Now there's a fixture that skips the test if cvxpy is not installed.
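
A minimal sketch of what such a fixture could look like, assuming pytest; the fixture and test names are illustrative, not the actual PennyLane test code:

```python
import pytest


@pytest.fixture
def cvxpy():
    # Skips the requesting test (instead of erroring out) when cvxpy cannot be
    # imported, and otherwise hands the imported module to the test.
    return pytest.importorskip("cvxpy")


def test_closest_psd_matrix_needs_cvxpy(cvxpy):
    # The test body can now safely reference cvxpy, e.g. its SolverError.
    assert hasattr(cvxpy.error, "SolverError")
```

The commit list for this PR also mentions a `skipif` marker, which achieves the same effect at collection time rather than at fixture setup time.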

codecov bot commented on Nov 19, 2021

Codecov Report

Merging #1918 (e399589) into master (b945fec) will not change coverage.
The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff           @@
##           master    #1918   +/-   ##
=======================================
  Coverage   98.83%   98.83%           
=======================================
  Files         222      222           
  Lines       16979    16979           
=======================================
  Hits        16781    16781           
  Misses        198      198           

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b945fec...e399589.

antalszava merged commit 9cf401d into master on Nov 19, 2021
antalszava deleted the kernel_solver_xfail branch on November 19, 2021, 21:09
antalszava added a commit that referenced this pull request Nov 23, 2021
* mark the kernel test not converging as xfail

* format

* skipif

* skipif marker

* format
antalszava added a commit that referenced this pull request Nov 25, 2021
* torch coerce change, qml.math for Hermitian

* test changes

* readd torch_device kwarg

* format

* use torch device arg and add cuda_was_set branch

* polish

* TestApply cover with torch_device; update the def for OrbitalRotation to be compatible with CUDA

* Add notes to tests using execute; CRX + CRY convert_like; Integration tests passing

* passthru test

* more tests

* format

* coerce refine + test

* tidy

* branch off for Torch

* def self._torch_device_requested and extend check in execute

* test and further adjustments

* format

* changelog

* Mark the kernel test not converging as xfail (#1918)

* mark the kernel test not converging as xfail

* format

* skipif

* skipif marker

* format

* changelog

* lint

* todo no longer relevant

* PauliRot identity matrix consider Torch

* format

* paulirot identity matrix test

* add gpu marker

* more torch_device usage

* more fixes with finite diff; update tests

* qml.math.allclose

* Move the state to device conversion from apply_operation to the execute method because it logically belongs there better

* Move the state to device conversion within if statement, add comment; change a test to avoid raising a warning

* format

* error message; gpu marker on the coercion test in qml.math; error matching

* format

* warning

* warning test

* no print

* format
josh146 added a commit that referenced this pull request Nov 29, 2021
* extend the changelog item of the init module deprecation (#1833)

* add sorting to `get_parameters`

* undo accidental direct push to v0.19.0-rc0

* Remove template decorator from PennyLane functions (#1808) (#1835)

* Remove template decorator.

* Correct import and changelog.

* Change interferometer.

* Update pennylane/templates/broadcast.py

Co-authored-by: antalszava <antalszava@gmail.com>

* Update doc/releases/changelog-dev.md

Co-authored-by: antalszava <antalszava@gmail.com>

* Update pennylane/templates/broadcast.py

Co-authored-by: antalszava <antalszava@gmail.com>

* Update from review.

* More import removed.

* Update parametric ops.

* Update pennylane/templates/subroutines/arbitrary_unitary.py

Co-authored-by: antalszava <antalszava@gmail.com>

Co-authored-by: antalszava <antalszava@gmail.com>

Co-authored-by: Romain <rmoyard@gmail.com>

* Update `QNGOptimizer` to handle deprecations (#1834)

* pass approx option in QNG if need be

* convert to array instead of list; add recwarn to QNG test case

* Update pennylane/optimize/qng.py

Co-authored-by: Josh Izaac <josh146@gmail.com>

* add expval(H) with two input params test case

* deprecate diag_approx keyword for QNGOptimizer

* format

* docstring

* changelog

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Sorted `tape.get_parameters` (#1836)

* sorted `get_parameters`

* adapt PR count

* fix metric tensor tests

* also sort in `set_parameters`

* Fix bug to ensure qml.math.dot works in autograph mode (#1842)

* Fix bug to ensure qml.math.dot works in autograph mode

* changelog

Co-authored-by: antalszava <antalszava@gmail.com>

* Update `expand_fn` (rc) (#1838)

* expand_fn

* changes just as in master

* new lines

* Ensure that the correct version of keras is installed for TensorFlow CI (#1847)

* Add deprecation warning for default.tensor (#1851)

* add warning and test

* add changelog

* Update pennylane/beta/devices/default_tensor.py

Co-authored-by: antalszava <antalszava@gmail.com>

Co-authored-by: antalszava <antalszava@gmail.com>

* fix docstrings in hf module (#1853)

* fix hf docstrings

* Update pennylane/hf/hartree_fock.py

Co-authored-by: Romain <rmoyard@gmail.com>

Co-authored-by: Romain <rmoyard@gmail.com>

* Docstring updates of new features before the release (#1859)

* doc format

* docstring example

* Hide :orphan: from displaying in the release notes (#1863)

* Update single_dispatch.py (#1857)

* Add support for abstract tensors in AmplitudeEmbedding (#1845)

* Add support for abstract tensors in AmplitudeEmbedding

* add tests

* update changelog

* Apply suggestions from code review

Co-authored-by: Romain <rmoyard@gmail.com>

* fix

* fix

* fix

* Add batchtracer support

* Update tests/math/test_is_abstract.py

Co-authored-by: antalszava <antalszava@gmail.com>

Co-authored-by: Romain <rmoyard@gmail.com>
Co-authored-by: antalszava <antalszava@gmail.com>

* Add documentation for the hf module (#1839)

* create rst file for hf module

* Documentation Updates (#1868)

* documentation updates

* Update doc/introduction/interfaces/numpy.rst

* Apply suggestions from code review

Co-authored-by: antalszava <antalszava@gmail.com>

* scipy minimize update

* collections of qnode section re-added

* new lines

Co-authored-by: antalszava <antalszava@gmail.com>

* Update deprecated use of `qml.grad`/`qml.jacobian` in docstrings (#1860)

* autograd device requires_grad

* math.quantum example

* tracker

* tracker 2nd

* batch_transform

* batch_params

* fix batch params docstring

* update

* Update pennylane/transforms/batch_params.py

Co-authored-by: antalszava <antalszava@gmail.com>

* `qml.math.is_independent` no deprecation warnings with `qml.grad`/`qml.jacobian` (#1862)

* is_independent no deprecation warning

* is_independent no deprecation warning tests

* docstring

* formatting

* mark arrayboxes as trainable still; adjust test case (no autograd warning)

* define requires_grad=True for random args only with Autograd (not JAX) to avoid errors with JAX tracers

* add in warning checks such that warnings are for sure not emitted

* Update pennylane/transforms/batch_params.py

Co-authored-by: Christina Lee <christina@xanadu.ai>

* Update pennylane/transforms/batch_params.py

Co-authored-by: Christina Lee <christina@xanadu.ai>

* Update pennylane/transforms/batch_params.py

Co-authored-by: Christina Lee <christina@xanadu.ai>

* Update pennylane/transforms/batch_params.py

Co-authored-by: Christina Lee <christina@xanadu.ai>

* Update pennylane/transforms/batch_params.py

Co-authored-by: Christina Lee <christina@xanadu.ai>

* Update pennylane/transforms/batch_params.py

Co-authored-by: Christina Lee <christina@xanadu.ai>

Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Romain <rmoyard@gmail.com>
Co-authored-by: Christina Lee <christina@xanadu.ai>

* change example (#1875)

* `v0.19.0` release notes (#1852)

* pl + qchem versions

* draft release notes

* example + changelog-dev.md

* changelog-dev

* batch_params output

* shorten U decomp example to print well

* need orbitals kwarg when using hf_state as positional came after kwarg

* need orbitals kwarg when using hf_state as positional came after kwarg

* trainable params

* orphan

* contributors

* notes

* HF solver example

* HF solver example

* HF solver text

* doc item for #1724

* item for 1769

* doc item for #1792

* #1804, #1802

* #1824

* pyscf pin

* PR links

* resolve changelog (#1858)

Co-authored-by: agran2018 <45397799+agran2018@users.noreply.github.com>

* Update release_notes.md

* Apply HF wording suggestion from Josh

* fix batch params docstring

* update

* merge batch_params dim_output

* new shape for the output of the batch_params example

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* transforms order change as suggested

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* shorten ex as suggested

* shorten insert example

* move classical jac change to improvements as suggested

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* move is_independent to improvements

* remove sentence as suggested

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Update doc/releases/changelog-0.19.0.md

Co-authored-by: Josh Izaac <josh146@gmail.com>

* change the order of the improvements based on the suggestions

* rephrase qml.grad/qml.jacobian deprecation date

* Update doc/releases/changelog-0.19.0.md

* no current release tag for v0.18.0

* no -dev

* fix

* fix

* correct GateFabric api link

* JAX interface as improvement

* create_expand_fn suggestion

* requires_grad.png

* GateFabric full path

Co-authored-by: agran2018 <45397799+agran2018@users.noreply.github.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Romain <rmoyard@gmail.com>

* Tkt rc (#1870)

* changed readme and index tkt

* kept changelog

* Changed docs image

* Deleted changelog

* Apply suggestions from code review

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>

* removed readme tkt

* Removed readme

* README.md form release candidate

Co-authored-by: Catalina Albornoz <catalina@xanadu.ai>
Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>

* contributors

* Update the norm check for `QubitStateVector` in `default.qubit` (v0.19.1) (#1924)

* norm check fix

* 0.19.1 changelog

* 0.19.1 bump

* pr number

* changelog for 0.19.1 in the release notes

* torch coerce change, qml.math for Hermitian

* test changes

* readd torch_device kwarg

* format

* use torch device arg and add cuda_was_set branch

* polish

* TestApply cover with torch_device; update the def for OrbitalRotation to be compatible with CUDA

* Add notes to tests using execute; CRX + CRY convert_like; Integration tests passing

* passthru test

* more tests

* format

* coerce refine + test

* tidy

* branch off for Torch

* def self._torch_device_requested and extend check in execute

* test and further adjustments

* format

* changelog

* Mark the kernel test not converging as xfail (#1918)

* mark the kernel test not converging as xfail

* format

* skipif

* skipif marker

* format

* changelog

* lint

* todo no longer relevant

* PauliRot identity matrix consider Torch

* format

* paulirot identity matrix test

* add gpu marker

* more torch_device usage

* more fixes with finite diff; update tests

* qml.math.allclose

* Move the state to device conversion from apply_operation to the execute method because it logically belongs there better

* Move the state to device conversion within if statement, add comment; change a test to avoid raising a warning

* format

* error message; gpu marker on the coercion test in qml.math; error matching

* format

* warning

* warning test

* no print

* format

* Fix parametric ops GPU compatibility (`v0.19.1`) (#1927)

* torch coerce change, qml.math for Hermitian

* test changes

* readd torch_device kwarg

* format

* use torch device arg and add cuda_was_set branch

* polish

* TestApply cover with torch_device; update the def for OrbitalRotation to be compatible with CUDA

* Add notes to tests using execute; CRX + CRY convert_like; Integration tests passing

* passthru test

* more tests

* format

* coerce refine + test

* tidy

* branch off for Torch

* def self._torch_device_requested and extend check in execute

* test and further adjustments

* format

* changelog

* Mark the kernel test not converging as xfail (#1918)

* mark the kernel test not converging as xfail

* format

* skipif

* skipif marker

* format

* changelog

* lint

* todo no longer relevant

* PauliRot identity matrix consider Torch

* format

* paulirot identity matrix test

* add gpu marker

* more torch_device usage

* more fixes with finite diff; update tests

* qml.math.allclose

* Move the state to device conversion from apply_operation to the execute method because it logically belongs there better

* Move the state to device conversion within if statement, add comment; change a test to avoid raising a warning

* format

* error message; gpu marker on the coercion test in qml.math; error matching

* format

* warning

* warning test

* no print

* format

* contributors for the bug fix release

* changelog

* Update doc/development/release_notes.md

* import order

* Update tests/devices/test_default_qubit_torch.py

* resolve conflicts

* remove _apply_operation as a whole

* use f string instead of format call to make codefactor happy

* use f string instead of format call in default.qubit to make codefactor happy

* more f strings; {} instead of dict()

* format

* PauliRot Torch run on CPU too to increase coverage

* format

Co-authored-by: dwierichs <davidwierichs@gmail.com>
Co-authored-by: Romain <rmoyard@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
Co-authored-by: Christina Lee <christina@xanadu.ai>
Co-authored-by: soranjh <40344468+soranjh@users.noreply.github.com>
Co-authored-by: Guillermo Alonso-Linaje <65235481+KetpuntoG@users.noreply.github.com>
Co-authored-by: agran2018 <45397799+agran2018@users.noreply.github.com>
Co-authored-by: CatalinaAlbornoz <albornoz.catalina@hotmail.com>
Co-authored-by: Catalina Albornoz <catalina@xanadu.ai>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>