
what is latest solution for Lwma? #73

Closed
mraksoll4 opened this issue Mar 3, 2023 · 6 comments

@mraksoll4 commented Mar 3, 2023
What is the latest solution for LWMA? And are these settings correct for T=60 and N=90?

Using the latest Bitcoin sources.

@mraksoll4 commented Mar 3, 2023

In the latest BTC source code there are these constants:

https://github.com/bitcoin/bitcoin/blob/master/src/chain.h#L24

https://github.com/bitcoin/bitcoin/blob/master/src/chain.h#L40

https://github.com/bitcoin/bitcoin/blob/master/src/timedata.h#L16

What will be correct for T=60, N=90?

I was thinking about these settings:

static constexpr int64_t MAX_FUTURE_BLOCK_TIME = 5 * 60;

static const int64_t DEFAULT_MAX_TIME_ADJUSTMENT = 3 * 50;

static constexpr int64_t MAX_BLOCK_TIME_GAP = 4 * 60;

And the latest BTC source also has these:

https://github.com/bitcoin/bitcoin/blob/master/src/pow.cpp#L76

https://github.com/bitcoin/bitcoin/blob/master/src/headerssync.cpp#L189

https://github.com/bitcoin/bitcoin/blob/master/src/headerssync.cpp#L237

@zawy12 (Owner) commented Mar 4, 2023

I've decided on much longer averaging windows because I could never determine there was a benefit to faster averaging periods when there is on-off mining. Longer windows make difficulty smoother for everyone.

Just now I edited the code below to recommend much larger N values. The code is a modification of Bitcoin Gold's, both to correct an error I made way back then when directing their dev and to make what's happening easier to understand than the way he implemented the constants.

For T=60 in a small coin I would use N=360, which is a 6-hour averaging window with a StdDev of error of 1/SQRT(N) ≈ 5.3%. See the comments I added to the code to explain the consequences better:

#3 (comment)
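
For a quick sense of the tradeoff at T=60 (my arithmetic):

N =  90: 1/SQRT(90)  ≈ 10.5% StdDev of error, 1.5 hour window
N = 360: 1/SQRT(360) ≈  5.3% StdDev of error, 6 hour window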

OK, using my rule that FTL (MAX_FUTURE_BLOCK_TIME) needs to be 1/20 of T*N to limit difficulty manipulation to 5%, and that "Peer Time" (DEFAULT_MAX_TIME_ADJUSTMENT) needs to be 1/2 of that, I get:

static constexpr int64_t MAX_FUTURE_BLOCK_TIME = BLOCK_TIME * N / 20;
static const int64_t DEFAULT_MAX_TIME_ADJUSTMENT = MAX_FUTURE_BLOCK_TIME / 2;
static constexpr int64_t MAX_BLOCK_TIME_GAP = BLOCK_TIME * 12;
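
Plugging in T=60 and N=360 from above:

// MAX_FUTURE_BLOCK_TIME       = 60 * 360 / 20 = 1080 s (18 minutes)
// DEFAULT_MAX_TIME_ADJUSTMENT = 1080 / 2      =  540 s ( 9 minutes)
// MAX_BLOCK_TIME_GAP          = 60 * 12       =  720 s (12 minutes)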

I think it's important to make everything possible a function of the params files like I've done above so that future changes to the constants in the params files don't cause a surprise to unsuspecting future devs.

This last constant is based on my interpretation of what I think it does: I think it just determines when you get a message warning that you're not synced. It's rare for solvetimes to take longer than 9 block times, so for BTC's T=600 they chose 90 minutes. The pull request linked above the setting says they increased it from 6 to 9 block times (60 to 90 minutes) so that the message occurs less often; if it's been 7 or 8 block times since the last block, the node won't automatically get the warning that says it's not synced. GM said 9 means the message would occur and yet be a false warning (the node is actually synced) 6.4 times a year, so with T=60 instead of 600 you would expect 64 times a year if everything else is the same. So I decided 9 was too low for T=60 and set it to 12. It's only 3 extra block times, but it will be a very rare event if hashrate doesn't vary much. 12 is extremely rare, but small coins will vary in hashrate more than BTC.
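
As a sanity check on those rates (a minimal sketch, assuming solvetimes are exponentially distributed with mean T, so the chance that a given solvetime exceeds m block times is e^-m):

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Expected "not synced" false warnings per year for a gap threshold of m * T.
int main() {
    const double secondsPerYear = 365.0 * 24 * 3600;
    for (double T : {600.0, 60.0}) {            // BTC vs a T=60 coin
        const double blocksPerYear = secondsPerYear / T;
        for (double m : {9.0, 12.0}) {          // MAX_BLOCK_TIME_GAP in block times
            std::printf("T=%3.0f, gap=%2.0f*T: ~%.1f false warnings/year\n",
                        T, m, blocksPerYear * std::exp(-m));
        }
    }
    return 0;
}

This prints about 6.5 and 0.3 per year for T=600 (gaps of 9 and 12 block times), and about 65 and 3.2 per year for T=60, consistent with the figures above.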

Concerning that validation code section below BTC's difficulty algorithm, I tweeted this.

@mraksoll4 commented Mar 4, 2023

I received your mail about that header validation code. It's really strange that they don't simply validate with the algorithm itself.

  • Validation with the algorithm already exists, and that extra check can cause an unplanned softfork if miners use old nodes and there is a huge difficulty drop or rise, which can happen if some country bans mining, for example, like what happened in China.

About the JS variant: I simply tried converting the code with GPT chat, but I haven't tested it yet or checked whether it's correct; 99% sure it will be incorrect somewhere :)

function Lwma3CalculateNextWorkRequired(pindexLast, params) {
    const T = params.nPowTargetSpacing;

    const N = params.lwmaAveragingWindow;

    const k = N * (N + 1) * T / 2;

    const height = pindexLast.nHeight;
    const powLimit = UintToArith256(params.powLimit);

    if (height < N) { return powLimit.GetCompact(); }

    let avgTarget = new arith_uint256();
    let nextTarget = new arith_uint256();
    let thisTimestamp, previousTimestamp;
    let sumWeightedSolvetimes = 0, j = 0;

    const blockPreviousTimestamp = pindexLast.GetAncestor(height - N);
    previousTimestamp = blockPreviousTimestamp.GetBlockTime();

    for (let i = height - N + 1; i <= height; i++) {
        const block = pindexLast.GetAncestor(i);

        thisTimestamp = (block.GetBlockTime() > previousTimestamp) ?
            block.GetBlockTime() : previousTimestamp + 1;

        const solvetime = Math.min(6 * T, thisTimestamp - previousTimestamp);

        previousTimestamp = thisTimestamp;

        j++;
        sumWeightedSolvetimes += solvetime * j;

        const target = new arith_uint256();
        target.SetCompact(block.nBits);
        avgTarget = avgTarget.plus(target.div(N).div(k)); // JS can't use += on objects; assumes a plus() method
    }
    nextTarget = avgTarget.mul(sumWeightedSolvetimes);

    if (nextTarget.gt(powLimit)) { nextTarget = powLimit; } // relational operators don't work on objects; assumes a gt() method

    return nextTarget.GetCompact();
}
Note: The arith_uint256 class used in the original C++ code is not part of standard JavaScript, so I assume a port of it is defined elsewhere in the code, with the SetCompact/GetCompact, plus, div, mul, and gt methods used above (JavaScript has no operator overloading, so += and > cannot be applied to objects directly).

or

function Lwma3CalculateNextWorkRequired(pindexLast, params) {
    const BigNumber = require('bignumber.js');
    const T = params.nPowTargetSpacing;

    const N = params.lwmaAveragingWindow;

    const k = new BigNumber(N).times(N + 1).times(T).div(2);

    const height = pindexLast.nHeight;
    const powLimit = new BigNumber(params.powLimit);

    if (height < N) { return powLimit.toFixed(); }

    // nBits is a compact-encoded target, so it must be decoded before use.
    // (Assumes the usual Bitcoin compact encoding with exponent >= 3.)
    function compactToTarget(nBits) {
        const exponent = nBits >>> 24;
        const mantissa = nBits & 0x007fffff;
        return new BigNumber(mantissa).times(new BigNumber(256).pow(exponent - 3));
    }

    let avgTarget = new BigNumber(0);
    // Timestamps and the weighted solvetime sum fit in ordinary numbers;
    // only the 256-bit targets need arbitrary precision.
    let thisTimestamp, previousTimestamp;
    let sumWeightedSolvetimes = 0, j = 0;

    previousTimestamp = pindexLast.GetAncestor(height - N).GetBlockTime();

    for (let i = height - N + 1; i <= height; i++) {
        const block = pindexLast.GetAncestor(i);

        // Enforce monotonic timestamps so every solvetime is >= 1.
        thisTimestamp = (block.GetBlockTime() > previousTimestamp) ?
            block.GetBlockTime() : previousTimestamp + 1;

        const solvetime = Math.min(6 * T, thisTimestamp - previousTimestamp);

        previousTimestamp = thisTimestamp;

        j++;
        sumWeightedSolvetimes += solvetime * j;

        const target = compactToTarget(block.nBits);
        avgTarget = avgTarget.plus(target.div(N).div(k));
    }
    let nextTarget = avgTarget.times(sumWeightedSolvetimes);

    if (nextTarget.gt(powLimit)) { nextTarget = powLimit; }

    return nextTarget.toFixed(0);
}
In this updated code, require('bignumber.js') imports the bignumber.js library, and BigNumber instances perform the arithmetic; operations like plus and times replace the + and * operators used in the C++ code. Note that block.nBits is a compact-encoded target, so it has to be decoded (the compactToTarget helper above) rather than used directly as a number, and the toFixed() method returns the result as a string.
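
One more caveat: bignumber.js division keeps decimal places, unlike C++ integer division, so it may be worth pinning the precision and rounding mode globally (a minimal configuration sketch):

const BigNumber = require('bignumber.js');
// Round division results down to mimic C++ integer truncation more closely.
BigNumber.config({ DECIMAL_PLACES: 50, ROUNDING_MODE: BigNumber.ROUND_DOWN });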

@mraksoll4

I have a question about N=360 for T=60s.

Isn't 6 hours too much?

@zawy12 (Owner) commented Mar 8, 2023

It makes the difficulty nice and smooth, and I never noticed a problem from making it long. For a very long time I was biased towards fast response times, and everyone followed my advice. But I was basing it on coins having terrible problems, including getting stuck, if they kept Monero's algorithm, which has a 1-day average. What I didn't realize was that Monero does 2 or 3 things in addition to being a simple moving average that were causing most of the problem. It has a lag and something else that I think throws out the extremes, and it might do a sorting. The big problems were caused by delaying the adjustments and rejecting the most recent solvetimes. N=360 will probably make it the best LWMA currently out there, since the others followed my incorrect old recommendations. Also, it was only when the 3rd really smart person said "you probably need the windows a lot longer" that I finally realized my mistake.

@mraksoll4

The question is whether it makes sense to update the algorithm in an existing coin (not a new one) to a new version, or to just fix the time variables and N, since the comments mainly cover protecting new coins with workarounds (like using the old native algo for the first N blocks, etc.).

Here is the old code we use now:

// LWMA for BTC clones
// Copyright (c) 2017-2018 The Bitcoin Gold developers
// Copyright (c) 2018 Zawy (M.I.T license continued)
// Algorithm by zawy, a modification of WT-144 by Tom Harding
// Code by h4x3rotab of BTC Gold, modified/updated by zawy
// https://github.com/zawy12/difficulty-algorithms/issues/3#issuecomment-388386175
//  FTL must be changed to 300 or N*T/20 whichever is higher.
//  FTL in BTC clones is MAX_FUTURE_BLOCK_TIME in chain.h.
//  FTL in Ignition, Numus, and others can be found in main.h as DRIFT.
//  FTL in Zcash & Dash clones need to change the 2*60*60 here:
//  if (block.GetBlockTime() > nAdjustedTime + 2 * 60 * 60)
//  which is around line 3450 in main.cpp in ZEC and validation.cpp in Dash

unsigned int LwmaCalculateNextWorkRequired(const CBlockIndex* pindexLast, const Consensus::Params& params)
{
    const int64_t T = params.nPowTargetSpacing;
    // N=45 for T=600.  N=60 for T=150.  N=90 for T=60.
    const int64_t N = params.nZawyLwmaAveragingWindow;
    const int64_t k = N*(N+1)*T/2; // BTG's code has a missing N here. They inserted it in the loop
    const int height = pindexLast->nHeight;
    assert(height > N);

    arith_uint256 sum_target;
    int64_t t = 0, j = 0, solvetime;

    // Loop through N most recent blocks.
    for (int i = height - N+1; i <= height; i++) {
        const CBlockIndex* block = pindexLast->GetAncestor(i);
        const CBlockIndex* block_Prev = block->GetAncestor(i - 1);
        solvetime = block->GetBlockTime() - block_Prev->GetBlockTime();
        solvetime = std::max(-6*T, std::min(solvetime, 6*T));
        j++;
        t += solvetime * j;  // Weighted solvetime sum.
        arith_uint256 target;
        target.SetCompact(block->nBits);
        sum_target += target / (k * N); // BTG added the missing N back here.
    }
    // Keep t reasonable to >= 1/10 of expected t.
    if (t < k/10 ) {   t = k/10;  }
    arith_uint256 next_target = t * sum_target;

    const arith_uint256 pow_limit = UintToArith256(params.powLimit);
    if (next_target > pow_limit) {
        next_target = pow_limit;
    }

    return next_target.GetCompact();
}

with N = 90
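
Checking the FTL rule from the header comment above (FTL = 300 or N*T/20, whichever is higher):

// T=60, N= 90: N*T/20 =  270 -> below the 300 s floor, so FTL = 300
// T=60, N=360: N*T/20 = 1080 -> FTL = 1080 s, matching MAX_FUTURE_BLOCK_TIME above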

@zawy12 zawy12 closed this as completed May 7, 2023