Cache entrypoints in group #3622
Conversation
Force-pushed from 030f1b3 to b6fe7da (compare)
aiida/plugins/entry_point.py (outdated)

```diff
@@ -50,6 +50,28 @@ class EntryPointFormat(enum.Enum):
     MINIMAL = 3


+class EntryPointCache():
```
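For readers without access to the full diff, here is a minimal sketch of what a dedicated cache class along these lines might look like. The use of `pkg_resources` is an assumption for illustration only; the actual module resolves entry points through `reentry`:

```python
import pkg_resources  # stand-in for reentry's entry point lookup


class EntryPointCache:
    """Sketch of a cache mapping an entry point group name to its entry points."""

    def __init__(self):
        self._cache = {}

    def get_entry_points(self, group):
        """Return the entry points in `group`, computing them only on first access."""
        if group not in self._cache:
            self._cache[group] = list(pkg_resources.iter_entry_points(group=group))
        return self._cache[group]
```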
I think this might be equivalent to adding the `@functools.lru_cache(maxsize=None)` decorator to the `get_entry_points` function.
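In other words, the hand-rolled class could likely be replaced by a memoized module-level function. A sketch under the same assumption as above (`pkg_resources` standing in for reentry's lookup):

```python
import functools

import pkg_resources  # stand-in for reentry's entry point lookup


@functools.lru_cache(maxsize=None)
def get_entry_points(group):
    """Return all entry points within a group; results are cached per group name."""
    return list(pkg_resources.iter_entry_points(group=group))
```

One caveat of this approach: if new plugins are installed in the running interpreter, the memoized results go stale; `get_entry_points.cache_clear()` would reset the cache.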
Good point @greschd, yes, this would be a better way to do it.
Thanks, that seems much more elegant.
Force-pushed from 60c9114 to 3f0a43b (compare)
Entry point loading is already cached at the `reentry` level, but getting all entry points within a group can still take a significant amount of time. This commit introduces a simple cache at the AiiDA level. An alternative would be to add a cache at the reentry level.

To provide some context, the timings on a query for 300 Dict nodes are as follows:

* No cache: ~110 ms
* Cache: 67 ms
* Cache at `load_node_class` level: 58 ms
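The PR does not show the measurement setup, but numbers of this kind could be reproduced with something along the following lines, assuming a configured profile containing roughly 300 `Dict` nodes (the query itself is standard AiiDA API; the exact benchmark used here is not part of the PR):

```python
import timeit

from aiida import load_profile
from aiida.orm import Dict, QueryBuilder

load_profile()


def run_query():
    """Query for all Dict nodes; constructing each node hits the entry point lookup."""
    return QueryBuilder().append(Dict).all()


# Average over several repetitions to smooth out noise.
print('{:.1f} ms per query'.format(timeit.timeit(run_query, number=10) / 10 * 1000))
```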
Force-pushed from 3f0a43b to e51e7d8 (compare)
There is a C implementation of the cache: https://pypi.org/project/fastcache/
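For completeness, fastcache advertises itself as a C implementation of `functools.lru_cache`, so swapping it in would presumably look like this (untested sketch, same `pkg_resources` stand-in as above):

```python
from fastcache import clru_cache  # C implementation of lru_cache

import pkg_resources  # stand-in for reentry's entry point lookup


@clru_cache(maxsize=None)
def get_entry_points(group):
    """Same memoized lookup as before, but backed by the C cache."""
    return list(pkg_resources.iter_entry_points(group=group))
```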
So you want to go with the …
Looks great
@muhrin I would say the savings are significant and probably worth reaping. Of course, one could also try to do this on the reentry side...
Let me know. If you think we should keep the cache here, I'll add the proper docstrings etc.