I stumbled upon this when playing around with some benchmarks that read a file using zlib compression, and a sync.Pool seems to work well enough. I think I can put something together for the various Resetters, if there's an interest.
In my toy benchmark, it reduces memory allocation by about 60%, with a corresponding reduction in the number of GC cycles. Sadly, it doesn't improve the event rate as much as I'd hoped: on my machine, a file that needs ~30 seconds to process is sped up to ~28 seconds (wall time). I suspect I'm actually bottlenecked on something else; I'll open a separate issue describing what that may be.
we should consider introducing a better buffer+reader reuse strategy (e.g. for decompressors that support `Reset`ing their readers). This would reduce memory allocation and memory pressure, and in turn the amount of work needed by the garbage collector.
issues:

- `sync.Map`, as proposed in proposal: sync, sync/atomic: add PoolOf, MapOf, ValueOf golang/go#47657