Change burning man capping algorithm #6949
Conversation
Add an inner class to 'BurningManServiceTest' to test the cap shares & (capped + uncapped) burn shares allocated to each candidate found upon calling 'BurningManService.getBurningManCandidatesByName'. To this end, set up mocks of 'DaoStateService' and the other injected dependencies of 'BurningManService' to return skeletal proof-of-burn & compensation issuance txs & payloads for the test users supplied for each case.

Ensure the test cases exercise the capping algorithm fairly thoroughly, as it is to be changed in the succeeding commits, per the proposal bisq-network/proposals#412. In particular, provide test cases where the new algorithm would apply more than two capping rounds, as opposed to the current algorithm, which always applies at most two. (This leads to strictly lower shares for the Legacy Burning Man and non-strict increases in the shares of all the other burning men.)
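As a rough illustration of the kind of scaffolding involved, here is a minimal Mockito sketch; the DaoState interface, method names and figures are hypothetical stand-ins, not the actual Bisq types:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;

class CappingTestSketch {
    // Hypothetical stand-in for the injected DAO state dependency:
    interface DaoState {
        List<Long> proofOfBurnAmountsFor(String candidateName);
    }

    // Toy computation standing in for the service under test:
    static double burnShare(DaoState daoState, String name, long totalBurned) {
        long burned = daoState.proofOfBurnAmountsFor(name).stream()
                .mapToLong(Long::longValue)
                .sum();
        return burned / (double) totalBurned;
    }

    void example() {
        // Mock the dependency so the code under test sees skeletal proof-of-burn data:
        DaoState daoState = mock(DaoState.class);
        when(daoState.proofOfBurnAmountsFor("alice")).thenReturn(List.of(100_000L, 50_000L));

        double share = burnShare(daoState, "alice", 600_000L);  // 0.25
        System.out.println(share);
    }
}
```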
Add a missing test case for burning man candidates with fully expired compensation issuance or proofs-of-burn, or who are inactive because they have never carried out a proof-of-burn, to prevent regressions in the forthcoming capping algorithm code changes. Also add missing assertions for the expected 'adjustedBurnAmountShare' property of each candidate to the test cases. Note that this property only affects the UI, including the displayed burn targets, not the actual BM revenue (DPT outputs or fee recipient selection), so we use approximate equality in the test assertions, as usual for floating point expressions.
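For instance, an approximate-equality assertion might look like the following sketch (the record and figures are hypothetical; the delta argument is what makes the comparison tolerant of FP rounding):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

class AdjustedShareAssertionSketch {
    // Hypothetical stand-in for BurningManCandidate:
    record Candidate(double adjustedBurnAmountShare) {}

    void check() {
        Candidate candidate = new Candidate(0.1 + 0.2 + 0.1);
        // Passes even though the FP sum is not exactly 0.4:
        assertEquals(0.4, candidate.adjustedBurnAmountShare(), 1e-10);
    }
}
```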
Replace the HashMap in the 'BurningManService.getBurningManCandidatesByName' result construction with a TreeMap, to ensure that the map values are ordered deterministically (alphabetically by candidate name) when computing floating point sums. The map values are streamed over in a few places in this method, and elsewhere in 'DelayedPayoutTxReceiverService', when performing double precision summation to compute the DPT. An unordered map introduces potential nondeterminism due to the nonassociativity of FP addition, which makes sums dependent on the term order. (Note that 'DoubleStream::sum' uses compensated (Kahan) summation, which makes it effectively quad precision internally, so the chance of term reordering changing the result by even a single ULP is probably quite low here; there might not be much of a problem in practice.)
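A small self-contained demo of why iteration order can matter for double sums (the map contents are hypothetical):

```java
import java.util.Map;
import java.util.TreeMap;

public class OrderedSumDemo {
    public static void main(String[] args) {
        // FP addition is not associative, so a sum can depend on term order:
        System.out.println((0.1 + 0.2) + 0.3);  // 0.6000000000000001
        System.out.println(0.1 + (0.2 + 0.3));  // 0.6

        // A TreeMap iterates in key order (alphabetical for String keys), so
        // streaming over values() always visits the terms in the same order:
        Map<String, Double> burnShares = new TreeMap<>();
        burnShares.put("bob", 0.2);
        burnShares.put("alice", 0.1);
        burnShares.put("carol", 0.3);
        double total = burnShares.values().stream().mapToDouble(Double::doubleValue).sum();
        System.out.println(total);
    }
}
```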
Change the algorithm used to adjust & cap the burn share of each BM candidate to use an unlimited number of 'rounds', as described in bisq-network/proposals#412. That is, instead of capping the shares once, then distributing the excess to the remaining BM, then capping again and giving any excess to the Legacy Burning Man, we cap-redistribute-cap-redistribute-... an unlimited number of times until no more candidates are capped. This has the effect of reducing the LBM's share and increasing everyone else's, alleviating the security risk of giving too much to the LBM (who is necessarily uncapped).

Instead of implementing the new algorithm directly, we simply enlarge the set of candidates who should be capped to include those who would eventually be capped by the new algorithm, in order to determine how much excess burn share should go to the remaining BM. Then we apply the original method, 'candidate.calculateCappedAndAdjustedShares(..)', to set each share to be equal to its respective cap or uniformly scaled upwards from the starting amount accordingly.

To this end, the static method 'BurningManService.imposeCaps' is added, which determines which candidates will eventually be capped, by sorting them in descending order of burn-share/cap-share ratio, then marking all the candidates in some suitable prefix of the list as capped, iterating through them one-by-one & gradually increasing the virtual capping round (starting at zero) until the end of the prefix is reached. (The method also determines what the uncapped adjusted burn share of each BM should be, but that only affects the BM view & burn targets.) In this way, the new algorithm runs in guaranteed O(n * log n) time.

To prevent failed trades, the new algorithm is set to activate at time 'DelayedPayoutTxReceiverService.PROPOSAL_412_ACTIVATION_DATE', with a placeholder value of 12am, 1st January 2024 (UTC). This simply toggles whether the for-loop in 'imposeCaps' should stop after capping round 0, since doing so will lead to identical behaviour to the original code (even accounting for FP rounding errors).
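To see the difference concretely, take some hypothetical numbers: candidates A, B & C with burn shares 0.50, 0.30 & 0.20 and cap shares 0.10, 0.35 & 0.60. The first round caps A at 0.10 and redistributes its 0.40 excess to B & C in a 3:2 ratio, giving B = 0.54 and C = 0.36. The current algorithm then caps B at 0.35 and hands the 0.19 excess straight to the LBM. The new algorithm instead redistributes it to C, who is still under cap, so C ends at 0.55 and the LBM receives nothing.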
Since the 'MockitoAnnotations.initMocks' method used to initialise @Mock, @Spy, etc. fields is deprecated, use MockitoSession instead, as suggested by the Javadoc. (Or just remove the call, if the test class is not actually using any Mockito annotations.) This allows strict stubbing (the default for Mockito 4), though it must be configured to be lenient in the P2P test classes, to prevent failures due to unused stubs.
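For reference, the MockitoSession-based setup looks roughly like this (the mocked field is illustrative; the P2P test classes would pass Strictness.LENIENT instead):

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoSession;
import org.mockito.quality.Strictness;

class SomeServiceTest {
    @Mock
    private Runnable someDependency;  // illustrative @Mock field

    private MockitoSession session;

    @BeforeEach
    void setUp() {
        // Replaces the deprecated MockitoAnnotations.initMocks(this):
        session = Mockito.mockitoSession()
                .initMocks(this)
                .strictness(Strictness.STRICT_STUBS)  // fail on unused/mismatched stubs
                .startMocking();
    }

    @AfterEach
    void tearDown() {
        session.finishMocking();
    }
}
```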
I've just pushed another fairly minor commit to this branch, to clean up the test classes a little by replacing the deprecated 'MockitoAnnotations.initMocks' method.
@alvasw @HenrikJannsen @jmacxx Could one of you review this PR, please? Thanks!
I am traveling at the moment but will review as soon as I find time.
utACK
utACK
utACK
Change the algorithm used to adjust & cap the burn share of each BM candidate to use an unlimited number of 'rounds', as described in bisq-network/proposals#412. That is, instead of capping the shares once, then distributing the excess to the remaining BM, then capping again and giving any excess to the Legacy Burning Man, we cap-redistribute-cap-redistribute-... an unlimited number of times until no more candidates are capped. This has the effect of reducing the LBM's share and increasing everyone else's, alleviating the security risk of giving too much to the LBM (who is necessarily uncapped).
Implementation details
Instead of implementing the new algorithm directly, we simply enlarge the set of candidates who should be capped to include those who would eventually be capped by the new algorithm, in order to determine how much excess burn share should go to the remaining BM. Then we apply the original method, `candidate.calculateCappedAndAdjustedShares(..)`, to set each share to be equal to its respective cap or uniformly scaled upwards from the starting amount accordingly.

To this end, the static method `BurningManService.imposeCaps` is added, which determines which candidates will eventually be capped, by sorting them in descending order of burn-share/cap-share ratio, then marking all the candidates in some suitable prefix of the list as capped, iterating through them one-by-one & gradually increasing the virtual capping round (starting at zero) until the end of the prefix is reached. (The method also determines what the uncapped adjusted burn share of each BM should be, but that only affects the BM view & burn targets.) In this way, the new algorithm runs in guaranteed O(n * log n) time.
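A self-contained sketch of that prefix-marking idea follows; the Candidate record and method here are illustrative stand-ins rather than the actual Bisq code, and the per-candidate round bookkeeping and adjusted-share computation are omitted:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

final class CappingSketch {
    record Candidate(String name, double burnShare, double capShare) {}

    /**
     * Marks the prefix of candidates (in descending burn/cap ratio) that the
     * unlimited-round algorithm would eventually cap. Runs in O(n log n).
     * With limitToFirstRound = true, only candidates already over their cap
     * are marked, mimicking the original two-round behaviour.
     */
    static Set<String> eventuallyCapped(List<Candidate> candidates, boolean limitToFirstRound) {
        List<Candidate> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingDouble((Candidate c) -> c.burnShare() / c.capShare())
                .reversed());

        Set<String> capped = new LinkedHashSet<>();
        double remainingBurnShare = 1.0;  // burn share of the not-yet-capped candidates
        double remainingCapShare = 1.0;   // share budget left for them after paying out caps
        for (Candidate c : sorted) {
            // Uncapped candidates end up scaled by remainingCapShare / remainingBurnShare;
            // stop at the first candidate whose scaled share would fit under its cap.
            if (c.burnShare() * remainingCapShare <= c.capShare() * remainingBurnShare) {
                break;
            }
            if (limitToFirstRound && c.burnShare() <= c.capShare()) {
                break;  // pre-activation: stop after virtual capping round 0
            }
            capped.add(c.name());
            remainingBurnShare -= c.burnShare();
            remainingCapShare -= c.capShare();
        }
        return capped;
    }
}
```

With burn shares 0.50/0.30/0.20 and caps 0.10/0.35/0.60 (the figures from the worked example above), `eventuallyCapped` marks A and B, leaving C to be scaled up to 0.55 by the usual redistribution.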
Activation date

To prevent failed trades, the new algorithm is set to activate at time `DelayedPayoutTxReceiverService.PROPOSAL_412_ACTIVATION_DATE`, with a placeholder value of 12am, 1st January 2024 (UTC). This simply toggles whether the for-loop in `imposeCaps` should stop after capping round 0, since doing so will lead to identical behaviour to the original code (even accounting for FP rounding errors).
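The date gate itself can be as simple as the following sketch (the constant name comes from the PR; the implementation shown is an assumption, not the actual Bisq code):

```java
import java.time.Instant;

final class ActivationSketch {
    // Placeholder activation time from the PR: 12am, 1st January 2024 (UTC).
    static final Instant PROPOSAL_412_ACTIVATION_DATE = Instant.parse("2024-01-01T00:00:00Z");

    // Pre-activation, imposeCaps would stop after capping round 0,
    // reproducing the original algorithm's behaviour exactly.
    static boolean limitCappingRounds() {
        return Instant.now().isBefore(PROPOSAL_412_ACTIVATION_DATE);
    }
}
```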