Refactor `cache_execute()` as a transform #5318

Conversation
@albi3ro I've tagged you since you are the primary code owner, but I think we should get someone else involved too.

Thanks for the comment, @albi3ro!
Sorry about forgetting this aspect.

We should also include a warning if we are using finite shots, a persistent cache, and re-use a result:

pennylane/pennylane/workflow/execution.py, line 385 in ff533d5:

```python
if tape.shots and getattr(cache, "_persistent_cache", True):
```
Thanks for the updated review, @albi3ro! 🥳
OK, I think I see what the failures are. `cache=None` should be a valid kwarg to `_cache_transform`, in which case we just return `(tape,), null_postprocessing` early.
Ah, sorry @albi3ro, I missed that the tests failed! In my opinion, we shouldn't support passing
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff           @@
##        master    #5318    +/-   ##
=====================================
  Coverage     ?   99.66%
=====================================
  Files        ?      401
  Lines        ?    36927
  Branches     ?        0
=====================================
  Hits         ?    36802
  Misses       ?      125
  Partials     ?        0
```

☔ View full report in Codecov by Sentry.
So the Codecov failures are strictly due to the lack of unit tests for `cache_execute()`. Given that we are more or less marking that function for removal, I'm fine with leaving those lines uncovered. Though this would potentially be another piece of evidence for removing `cache_execute()` immediately.

But the new transform is so much easier to follow than `cache_execute()` 🎉 Thank you so much for handling this improvement :)
Thanks for the many reviews and advice, @albi3ro!

After offline discussion, it was decided to remove `cache_execute()`.
Co-authored-by: Thomas R. Bromley <49409390+trbromley@users.noreply.github.com>
Thanks, @Mandrenkov. This looks fantastic!
Reapproving after the removal of `cache_execute()`.
Amazing! Thanks again @Mandrenkov
Context:

Presently, the `cache_execute()` function is difficult to understand because it assumes too many responsibilities. This PR offers an alternative implementation, `_cache_transform()`, which leverages the `@transform` infrastructure to provide the same functionality in a shorter, cleaner, and more concise way. If this PR is approved, we should consider deprecating `cache_execute()` in the next release.

Description of the Change:

- Added `_cache_transform()`, which re-implements `cache_execute()` using `@transform`.
- Added `_apply_cache_transform()`, which conveniently wraps `_cache_transform()`.
- Updated the `cache_execute()` tests to spy on `_cache_transform()` instead.
- Added tests for `_cache_transform()`.

Benefits:

- The caching logic is much easier to follow than `cache_execute()`.

Possible Drawbacks:

- Temporarily duplicates the functionality of `cache_execute()`.

Related GitHub Issues:

None.
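As a rough illustration of the transform pattern the PR adopts: a transform maps one tape to a batch of tapes plus a postprocessing function, so caching reduces to "return an empty batch and replay the stored result on a hit; execute and store on a miss". The sketch below is hypothetical (string "tapes", a plain dict cache, `hash()` as the cache key), not PennyLane's actual `_cache_transform`:

```python
def cache_transform_demo(tape, cache):
    """Hedged sketch of a caching transform: returns (tapes, postprocessing).

    On a cache hit, no tapes need executing, so the postprocessing function
    ignores its input and replays the stored result. On a miss, the original
    tape is returned for execution and postprocessing stores its result.
    """
    key = hash(tape)  # real tapes would expose a dedicated hash property

    if key in cache:
        cached = cache[key]

        def replay(_results):
            return cached

        return (), replay  # nothing to execute; replay the cached result

    def store(results):
        cache[key] = results[0]
        return results[0]

    return (tape,), store


cache = {}
tapes, fn = cache_transform_demo("circuit-a", cache)  # miss: one tape to run
fn(["value"])                                         # store the fresh result
tapes2, fn2 = cache_transform_demo("circuit-a", cache)
print(tapes2, fn2(None))  # () value
```

Packaging the logic this way is what makes the new code easier to follow than `cache_execute()`: the cache lookup, the execution, and the result handling are cleanly separated by the (tapes, postprocessing) interface instead of being interleaved in one function.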