BigDecimal serializer memory and throughput optimizations #1014
This is an optimization of the default serializer for `BigDecimal`. The main goal is to avoid constructing the `BigDecimal` in its inflated form during deserialization, to save memory and reduce GC pressure in the program doing the deserialization. Avoiding construction with a `BigInteger` unscaled value is possible when the unscaled value fits in a `long`. More reasoning behind this can be found in my article: https://medium.com/@gdela/hidden-memory-effects-of-bigdecimal-caa0bfdb1e87

A nice side effect of this is improved throughput of serialization/deserialization for the cases where the unscaled value fits in a `long`, i.e. when the `BigDecimal` has fewer than ~19 precision digits (see the sketch below).
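To make the idea concrete, here is a minimal, hypothetical sketch of a read path (not the code from this PR), assuming the serialized form carries the unscaled value as a two's-complement big-endian byte array plus an `int` scale. When the unscaled value fits in a `long`, `BigDecimal.valueOf(long, int)` builds the compact representation and no `BigInteger` is allocated or retained; otherwise it falls back to the `BigInteger`-based constructor:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class BigDecimalReadSketch {

    // Rebuilds a BigDecimal from its unscaled bytes and scale. At most 8 bytes
    // of two's-complement data fit in a long, so in that case we can use
    // BigDecimal.valueOf(long, int) and skip the BigInteger entirely.
    static BigDecimal fromUnscaledBytes(byte[] unscaledBytes, int scale) {
        // BigInteger.toByteArray() never returns an empty array, so length >= 1.
        if (unscaledBytes.length <= 8) {
            long unscaled = 0;
            for (byte b : unscaledBytes) {
                unscaled = (unscaled << 8) | (b & 0xFF);
            }
            // Sign-extend the big-endian two's-complement value to 64 bits.
            int shift = 64 - unscaledBytes.length * 8;
            unscaled = (unscaled << shift) >> shift;
            return BigDecimal.valueOf(unscaled, scale);              // compact form
        }
        return new BigDecimal(new BigInteger(unscaledBytes), scale); // inflated form
    }

    public static void main(String[] args) {
        byte[] bytes = new BigDecimal("123.45").unscaledValue().toByteArray();
        System.out.println(fromUnscaledBytes(bytes, 2)); // prints 123.45
    }
}
```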
The changes in the `BigDecimalSerializer` do not change what the serialized form of a `BigDecimal` looks like, so they are backwards-compatible.
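A quick way to convince yourself why the wire format stays the same: whether a `BigDecimal` is built compactly from a `long` or from a `BigInteger`, it reports the same `unscaledValue()` and `scale()`, which is all the serializer needs to write. A minimal, hypothetical check (not part of this PR):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class SameExternalForm {
    public static void main(String[] args) {
        // Compact construction: unscaled value kept in a long, no BigInteger retained.
        BigDecimal compact = BigDecimal.valueOf(12345L, 2);
        // Inflated construction: the BigInteger unscaled value is retained internally (in OpenJDK).
        BigDecimal inflated = new BigDecimal(BigInteger.valueOf(12345L), 2);

        // Externally the two are indistinguishable, so whatever gets written
        // (unscaled value + scale) is identical in both cases.
        System.out.println(compact.equals(inflated));                                 // true
        System.out.println(compact.unscaledValue().equals(inflated.unscaledValue())); // true
        System.out.println(compact.scale() == inflated.scale());                      // true
    }
}
```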