tchaton changed the title from "Add support for caching bucket listing" to "Add support for caching, exporting, importing bucket listing" on Oct 11, 2023.
tchaton changed the title from "Add support for caching, exporting, importing bucket listing" to "Add support for caching, exporting, importing bucket list" on Oct 11, 2023.
Interesting feature request! Serving directory listing from cache is something we've been considering as part of #255, but ultimately won't be part of the initial solution. However, the entries we discover as part of a directory listing will be included in subsequent lookup, open, etc. requests.
Leaving this open to track as a future enhancement.
Not on this issue in particular. Work is ongoing on #255, with metadata caching (for open, etc., but not list) available under a build-time feature flag. It isn't ready for production use yet, but it can be built from source for non-production testing.
Tell us more about this new feature.
Hey there,
Assuming a dataset isn't going to change, I would like to configure mountpoint-s3 to keep the bucket listing in RAM or store it in a file (preferred, as the file can be collected and reused for a later re-mount).
Right now, I need to store the listing in a file myself to avoid incurring the cost of listing again (up to 20 minutes on a reasonably sized dataset).
Additionally, I would love to pay the cost of listing only once, ever. Ideally, I would like the ability to export and import the filesystem index.
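To make the request concrete, here is a minimal sketch of the export/import idea, independent of Mountpoint itself: pay the listing cost once, persist the result to a file, and load it on subsequent mounts. The names `cached_listing` and `list_fn` are hypothetical, not part of any mountpoint-s3 API.

```python
import json
import os

def cached_listing(cache_path, list_fn):
    """Return a bucket listing, loading it from cache_path if present.

    On a cache miss, list_fn() is called (e.g. a full S3 LIST, which may
    take many minutes on a large bucket) and the result is written to
    cache_path so later mounts can skip the listing entirely.
    """
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)
    entries = list_fn()  # the expensive, one-time listing
    with open(cache_path, "w") as f:
        json.dump(entries, f)
    return entries
```

With something like this, the exported cache file could also be shared across machines that mount the same immutable dataset, so only one of them ever performs the LIST.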