
LWMA difficulty algorithm #3

Open
zawy12 opened this issue Dec 6, 2017 · 26 comments

Comments

@zawy12
Owner

zawy12 commented Dec 6, 2017

CN coins: The last test of your fork is to make sure that when you sync from 0, your new difficulties match the old difficulties produced by the pre-fork code. See this note.

FWIW, it's possible to do the LWMA without looping over N blocks, using only the first and last difficulties (or targets) and their timestamps. In terms of difficulty, I believe it's:

ts = timestamp
D_N = difficulty of the most recently solved block
D_{N+1} = next_D
k = N*(N+1)/2
S = the previous denominator:
S = D_N / [ D_{N-2} + D_{N-1}/N - D_{-1}/N ] * k * T
D_{N+1} = [ D_{N-1} + D_N/N - D_0/N ] * T * k /
          [ S - (ts_{N-1} - ts_0) + (ts_N - ts_{N-1})*N ]
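As an aside, the same O(1) cost can also be had with a rolling update of the two sums the reference loop computes, rather than the closed form above. A minimal Python sketch (my own illustration, names are mine, not code from any coin):

from collections import deque

def lwma_rolling(solvetimes, N):
    # Yields L = sum(i * st_i, i = 1..N, oldest..newest) for each full window,
    # updated in O(1) per block instead of re-looping over N blocks.
    window = deque()
    W = 0  # weighted sum of solvetimes: sum of i * st_i
    S = 0  # plain sum of the solvetimes in the window
    for st in solvetimes:
        if len(window) == N:
            old = window.popleft()
            # Dropping the oldest shifts every weight down by one:
            # new weighted sum = W - S (the oldest term, weight 1, vanishes)
            W -= S
            S -= old
        window.append(st)
        W += len(window) * st  # newest gets the largest weight
        S += st
        if len(window) == N:
            yield W

# Check against the direct weighted loop:
sts = [55, 70, 60, 80, 40, 65, 90, 50]
N = 4
direct = [sum((i + 1) * w[i] for i in range(N))
          for w in (sts[k:k + N] for k in range(len(sts) - N + 1))]
assert direct == list(lwma_rolling(sts, N))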

I discovered a security weakness on 5/16/2019 due to my past FTL recommendations (which prevent bad timestamps from lowering difficulty). This weakness aka exploit does not seem to apply to Monero and Cryptonote coins that use node time instead of network time. If your coin uses network time instead of node local time, lowering FTL below about 125% of the "revert to node time" rule (70 minutes in BCH, ZEC, & BTC) will allow a 33% Sybil attack on your nodes, so the revert rule must be ~ FTL/2 instead of 70 minutes. If your coin uses network time without a revert rule (a bad design), it is subject to this attack under all conditions. See: zcash/zcash#4021

People like reading the history of this algorithm.

Comparing algorithms on live coins: Difficulty Watch
Send me a link to an open daemon or full API to be included.

LWMA for Bitcoin & Zcash Clones

See LWMA code for BTC/Zcash clones in the comments below. Known BTC clones using LWMA: BTC Gold, BTC Candy, Ignition, Pigeon, Zelcash, Zencash, BitcoinZ, Xchange, Microbitcoin.

Testnet Checking
Email me a link to your code and then send me 200 testnet timestamps and difficulties (CSV height, timestamp, difficulty). To fully test it, you can send out-of-sequence timestamps to testnet by changing the clock on the node that sends your miner the block templates. There's a Perl script in my GitHub code that you can use to simulate hash attacks on a single-computer testnet. Here's example code for getting the CSV timestamp/difficulty data to send me:

curl -X POST http://127.0.0.1:38782/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"getblockheadersrange","params":{"start_height":300,"end_height":412}}' -H 'Content-Type: application/json' | jq -r '.result.headers[] | [.height, .timestamp, .difficulty] | @csv'

Discord
There is a discord channel for devs using this algorithm. You must have a coin and history as a dev on that coin to join. Please email me at zawy@yahoo.com to get an invite.

Donations
Thanks to Sumo, Masari, Karbo, Electroneum, Lethean, and XChange.
38skLKHjPrPQWF9Vu7F8vdcBMYrpTg5vfM or your coin if it's on TO or cryptopia.

LWMA Description
This sets the difficulty by estimating the current hashrate from the most recent difficulties and solvetimes. It divides the average difficulty by the Linearly Weighted Moving Average (LWMA) of the solvetimes, which gives more weight to the more recent solvetimes. It is designed to protect small coins against timestamp manipulation and hash attacks. The basic equation is:

next_difficulty = average(Difficulties) * target_solvetime / LWMA(solvetimes)
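As a minimal Python sketch of that equation (illustrative only; the production C++ versions below add clamps for bad timestamps and integer-only math):

def lwma_next_difficulty(difficulties, solvetimes, T):
    # next_D = avg(D) * T / LWMA(solvetimes).
    # LWMA weights the most recent solvetime N and the oldest 1, then
    # normalizes by the weight sum N*(N+1)/2.
    N = len(solvetimes)
    avg_D = sum(difficulties) / N
    weighted = sum((i + 1) * st for i, st in enumerate(solvetimes))
    lwma = weighted / (N * (N + 1) / 2)
    return avg_D * T / lwma

# With perfectly on-target solvetimes, difficulty is unchanged:
print(lwma_next_difficulty([1000] * 60, [120] * 60, T=120))  # -> 1000.0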

LWMA-2/3/4 are no longer recommended because I could not show they were better than LWMA-1.

LWMA-1

Use this if you do not have NiceHash etc. problems.
See LWMA-4 below for more aggressive rules to help prevent NiceHash delays.

// LWMA-1 difficulty algorithm 
// Copyright (c) 2017-2018 Zawy, MIT License
// See commented link below for required config file changes. Fix FTL and MTP.
// https://github.com/zawy12/difficulty-algorithms/issues/3
// The following comments can be deleted.
// Bitcoin clones must lower their FTL. See Bitcoin/Zcash code on the page above.
// Cryptonote et al coins must make the following changes:
// BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW  = 11; // aka "MTP"
// DIFFICULTY_WINDOW  = 60; //  N=60, 90, and 150 for T=600, 120, 60.
// BLOCK_FUTURE_TIME_LIMIT = DIFFICULTY_WINDOW * DIFFICULTY_TARGET / 20;
// Warning Bytecoin/Karbo clones may not have the following, so check TS & CD vectors size=N+1
// DIFFICULTY_BLOCKS_COUNT = DIFFICULTY_WINDOW+1;
// The BLOCKS_COUNT is to make timestamps & cumulative_difficulty vectors size N+1
//  If your coin uses network time instead of node local time, lowering FTL < about 125% of 
// the "revert to node time" rule (70 minutes in BCH, ZEC, & BTC) will allow a 33% Sybil attack 
// on your nodes.  So revert rule must be ~ FTL/2 instead of 70 minutes.   See: 
// https://github.com/zcash/zcash/issues/4021

difficulty_type LWMA1_(std::vector<uint64_t> timestamps,
                       std::vector<uint64_t> cumulative_difficulties,
                       uint64_t T, uint64_t N, uint64_t height,
                       uint64_t FORK_HEIGHT, uint64_t difficulty_guess) {

    // This old way was not very proper:
    // uint64_t T = DIFFICULTY_TARGET;
    // uint64_t N = DIFFICULTY_WINDOW; // N=60, 90, and 150 for T=600, 120, 60.

    // Genesis should be the only time sizes are < N+1.
    assert(timestamps.size() == cumulative_difficulties.size() && timestamps.size() <= N+1);

    // Hard-code D if there are not at least N+1 blocks after the fork (or genesis).
    // This helps a lot in preventing a very common problem in CN forks from conflicting difficulties.
    if (height >= FORK_HEIGHT && height < FORK_HEIGHT + N) { return difficulty_guess; }
    assert(timestamps.size() == N+1);

    uint64_t L(0), next_D, i, this_timestamp(0), previous_timestamp(0), avg_D;

    previous_timestamp = timestamps[0] - T;
    for (i = 1; i <= N; i++) {
        // Safely prevent out-of-sequence timestamps.
        if (timestamps[i] > previous_timestamp) { this_timestamp = timestamps[i]; }
        else { this_timestamp = previous_timestamp + 1; }
        L += i * std::min(6*T, this_timestamp - previous_timestamp);
        previous_timestamp = this_timestamp;
    }
    if (L < N*N*T/20) { L = N*N*T/20; }
    avg_D = (cumulative_difficulties[N] - cumulative_difficulties[0]) / N;

    // Prevent round-off error for small D and overflow for large D.
    if (avg_D > 2000000*N*N*T) {
        next_D = (avg_D/(200*L)) * (N*(N+1)*T*99);
    }
    else { next_D = (avg_D*N*(N+1)*T*99) / (200*L); }

    // Optional. Make all insignificant digits zero for easy reading.
    i = 1000000000;
    while (i > 1) {
        if (next_D > i*100) { next_D = ((next_D + i/2)/i)*i; break; }
        else { i /= 10; }
    }
    return next_D;
}

The following is an idea that could be inserted right before "return next_D;":

    // Optional.
    // Make the last 2 digits = size of the hash rate change over the last 11 blocks, if it's statistically significant.
    // D=2540035 => hash rate ~3.5x higher than the difficulty expected. Blocks coming ~3.5x too fast.
    if (next_D > 10000) {
        uint64_t est_HR = (10*(11*T + (timestamps[N]-timestamps[N-11])/2)) /
                          (timestamps[N] - timestamps[N-11] + 1);
        if (est_HR > 5 && est_HR < 25) { est_HR = 0; }
        est_HR = std::min(static_cast<uint64_t>(99), est_HR);
        next_D = ((next_D + 50)/100)*100 + est_HR;
    }
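To show how the indicator reads back out, here is a tiny hypothetical decoder in Python (the function name is mine, not part of any coin's code):

def read_hashrate_indicator(next_D):
    # The optional code above stores the estimated hash rate change in the
    # last two digits of next_D: 35 -> ~3.5x the expected rate; 0 -> the
    # change was not statistically significant.
    est = next_D % 100
    return None if est == 0 else est / 10.0

print(read_hashrate_indicator(2540035))  # -> 3.5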

This is LWMA-2 versus LWMA when there is a 10x attack. There isn't any difference for smaller attacks. See further below for LWMA compared to other algos.
[image: LWMA-2 vs LWMA under a 10x attack]

Credits:

  • dgenr8 for showing LWMA can work
  • Aiwe (Karbo) for extensive discussions and motivation.
  • Thaer (Masari) for jump-starting LWMA and refinement discussions.
  • BTG (h4x4rotab) for finding initial pseudocode error and writing a good clean target method.
  • gabetron for pointing out an "if ST<0 then ST=0" type of exploit in one version, before it was used by anyone.
  • CDY for pointing out target method was not exact same as difficulty method.
  • IPBC and Intense for independently suffering and fixing a sneaky but basic code error.
  • Stellite and CDY for independently modifying an idea in my D-LWMA, forking to implement it, and showing me it worked. (The one-sided jump rule). My modification of their idea resulted in LWMA-2.

Known coins using it
The names here do not imply endorsement or success or even that they've forked to implement it yet. This is mainly for my reference to check on them later.
Alloy, Balkan, Wownero, Bitcoin Candy, Bitcoin Gold, BitcoiNote, BiteCode, BitCedi, BBScoin, Bitsum, BitcoinZ(?) Brazuk, DigitalNote, Dosh, Dynasty(?), Electronero, Elya, Graft, Haven, IPBC, Ignition, Incognito, Iridium, Intense, Italo, Loki, Karbo, MktCoin, MoneroV, Myztic, MarketCash, Masari, Niobio, NYcoin, Ombre, Parsi, Plura, Qwerty, Redwind?, Saronite, Solace, Stellite, Turtle, UltraNote, Vertical, Zelcash, Zencash. Recent inquiries: Tyche, Dragonglass, TestCoin, Shield 3.0. [update: and many more]

Importance of the averaging window size, N
The size of an algorithm's "averaging" window of N blocks is more important than the particular algorithm. Stability comes at a loss in speed of response by making N larger, and vice versa. Being biased towards low N is good because speed is proportional to 1/N while stability is proportional to SQRT(N). In other words, it's easier to get speed from low N than it is to get stability from high N. It appears the top 20 large coins can use an N up to 10x higher (a full day's averaging window) to get a smooth difficulty with no obvious ill effects. But it's very risky if a coin does not have at least 20% of the dollar reward per hour of the biggest coin for a given POW. Small coins using a large N can look nice and smooth for a month and then go into oscillations from a big miner and end up with 3-day delays between blocks, having to rent hash power to get unstuck.

By tracking hashrate more closely, a smaller N is more fair to your dedicated miners, who are important to marketing. Correctly estimating current hashrate to get the correct block solvetime is the only goal of a difficulty algorithm. This includes the challenge of dealing with bad timestamps. An N too small disastrously attracts on-off mining by varying too much, and doesn't track hashrate very well. A large N attracts "transient" miners by not tracking price fast enough and by not penalizing big miners who jump on and off, leaving your dedicated miners with a higher difficulty. This discourages dedicated miners, which causes the difficulty to drop in the next cycle when the big miner jumps on again, leading to worsening oscillations.

Masari forked to implement this on December 3, 2017 and has been performing outstandingly.
Iridium forked to implement this on January 26, 2018 and reports success. They forked again on March 19, 2018 for other reasons and tweaked it.
IPBC forked to implement it March 2, 2018.
Stellite implemented it March 9, 2018 to stop bad oscillations.
Karbowanec and QwertyCoin appear to be about to use it.

Comparison to other algorithms:

The competing algorithms are LWMA, EMA (exponential moving average), and Digishield. I'll also include SMA (simple moving average) for comparison. This is the process I go through to determine which is best.

First, I set the algorithms' "N" parameter so that they all give the same speed of response to an increase in hash rate (red bars). To give Digishield a fair chance, I removed the 6-block MTP delay. I had to lower its N value from 17 to 13 blocks to make it as fast as the others. I could have raised the other algos' N value instead, but I wanted a faster response than Digishield normally gives (based on watching hash attacks on Zcash and Hush). Also based on those attacks and attacks on other coins, my "test attack" below is 3x the baseline hashrate (red bars) and lasts for 30 blocks.

[image: compare1]

Then I simulate real hash attacks that start when difficulty accidentally drops 15% below baseline and end when difficulty is 30% above baseline. I used 3x attacks, but I get the same results for a wide range of attacks. The only clear advantage LWMA and EMA have over Digishield is fewer delays after attacks. The combination of the delay and "blocks stolen" metrics closely follows the result given by a root-mean-square of the error between where difficulty is and where it should be (based on the hash rate). LWMA also wins on that metric for a wide range of hash-attack profiles.

[image: compare4]

I also consider their stability during constant hash rate.

[image: compare3]

Here is my spreadsheet for testing algorithms. I've spent 9 months devising algorithms, learning from others, and running simulations in it.

[image: compare_hash]

Here's Hush with Zcash's Digishield compared to Masari with LWMA. Hush had 10x the market capitalization of Masari when these were done (so it should have been more stable). The beginning of the Masari chart is just after it forked to LWMA, when attackers were still trying to see if they could profit.

[image: Hush with Digishield]

[image: Masari with LWMA]

@zawy12 zawy12 changed the title WWHM difficulty algorithm LWWHM difficulty algorithm Dec 7, 2017
@zawy12 zawy12 changed the title LWWHM difficulty algorithm LW-WHM difficulty algorithm Dec 7, 2017
@zawy12 zawy12 changed the title LW-WHM difficulty algorithm WHM difficulty algorithm Dec 8, 2017
@zawy12 zawy12 changed the title WHM difficulty algorithm TWHM difficulty algorithm Jan 9, 2018
@zawy12 zawy12 changed the title TWHM difficulty algorithm WHM difficulty algorithm Jan 9, 2018
@zawy12 zawy12 changed the title WHM difficulty algorithm LWMA (WHM) difficulty algorithm Jan 11, 2018
@h4x3rotab

BTCGPU/BTCGPU@a3c8d1a

I'm onboarding :)

@h4x3rotab

h4x3rotab commented Feb 24, 2018

Here is the Python implementation of LWMA algo in Bitcoin Gold:

def BTG_LWMA(height, timestamp, target):
    # T=<target solvetime>

    T = 600

    # height -1 = most recently solved block number
    # target  = 1/difficulty/2^x where x is leading zeros in coin's max_target, I believe
    # Recommended N:

    N = 45 # int(45*(600/T)^0.3)

    # To get a more accurate solvetime to within +/- ~0.2%, use an adjustment factor.
    # This technique has been shown to be accurate in 4 coins.
    # In a formula:
# [edit by zawy: since he's using target method, adjust should be 0.998. This was my mistake. ]
    # adjust = 0.9989^(500/N)  
    # k = (N+1)/2 * adjust * T 
    k = 13632
    sumTarget = 0
    t = 0
    j = 0

    # Loop through the N most recent blocks.  "< height", not "<=".
    # height-1 = most recently solved block
    for i in range(height - N, height):
        solvetime = timestamp[i] - timestamp[i-1]
        j += 1
        t += solvetime * j
        sumTarget += target[i]

    # Keep t reasonable in case strange solvetimes occurred. 
    if t < N * k // 3:
        t = N * k // 3

    next_target = t * sumTarget // k // N // N
    return next_target

@zawy12 , please note that your original pseudocode has a mistake at the last line:

next_target = t * sumTarget / k

If I understand it correctly, it should be:

next_target = t * sumTarget / (k * N^2)

t is the weighted sum of solvetimes, which is on the order of T*N*(N+1)/2; sumTarget is the sum of the targets of the last N blocks, which equals N*avg_target.

Given that k is (N+1)/2 * adjust * T, and ignoring adjust (which is approximately 1), if we substitute the three variables into next_target = t * sumTarget / k, we get:

next_target = T*N*(N+1)/2 * N*avg_target / ((N+1)/2 * T) = N^2 * avg_target

Apparently, there's a superfluous factor of N^2.
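As a quick numeric check of the corrected formula (my own sketch, reusing the constants from the code above): with every solvetime equal to T and a constant target, next_target comes back out near avg_target.

# Sanity check (illustrative): constant solvetimes equal to T and a constant
# target should leave the target (nearly) unchanged under the corrected formula.
T, N = 600, 45
adjust = 0.9989 ** (500 / N)
k = int((N + 1) / 2 * adjust * T)        # = 13632, matching the BTG code
t = sum(j * T for j in range(1, N + 1))  # weighted solvetime sum = T*N*(N+1)/2
avg_target = 1_000_000
sumTarget = N * avg_target
next_target = t * sumTarget // k // N // N
print(next_target)  # ~1_012_000: avg_target plus ~1.2%, the deliberate 'adjust' bias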

@zawy12
Owner Author

zawy12 commented Feb 24, 2018

Thanks for the correction.

CjS77 added a commit to tari-project/tari that referenced this issue Nov 6, 2019

This PR adds in a difficulty adjustment algorithm.

Motivation and Context

We need to adjust the difficulty per block. This PR adds the LWMA
algorithm as per zawy12/difficulty-algorithms#3

CjS77 added a commit to tari-project/tari that referenced this issue Nov 6, 2019

Merge pull request #971
@neoncoin-project

Can you please make a version for BTC clones on version 0.8.6? I've tried, but the timestamp stuff is hard.

@dpowcore-project

What settings should I use for 5-minute blocks?

@cryptforall

What settings should I use for 5-minute blocks?

Is the blockchain already set to 5-minute blocks, and do you want to adjust the numbers?

@zawy12
Owner Author

zawy12 commented Apr 10, 2024

In contradiction to my previous statements that said 50% of N depends on block time, I'm now saying 200 to 600 is a good value. One reason is that this many samples has only 1/SQRT(N) error in sampling, which is 7% to 4% error. The main reason I changed is that for many years I thought Monero's DA was bad for every coin that tried it because the averaging time was a full day (too slow). But later I realized most of the extreme oscillation problems were because Monero modifies its simple moving average to have a lag in using solvetimes (it doesn't use the most recent solvetimes). Simple moving averages can also cause oscillations under on-off mining, but not as bad as what Monero forks were seeing before changing the difficulty algorithm. So instead of N=60 or 120 for 5-minute blocks, I'm saying 200 to 600 for all block times.

@someunknownman

someunknownman commented Apr 13, 2024

In contradiction to my previous statements that said 50% of N depends on block time, I'm now saying 200 to 600 is a good value. … (quoting the comment above)

I think N=576 is OK, but there's another question I found.

FTL (MAX_FUTURE_BLOCK_TIME) needs to be 1/20 of T*N to limit difficulty manipulation to 5%, and "Peer Time" (DEFAULT_MAX_TIME_ADJUSTMENT) needs to be 1/2 of that, so I get:

static constexpr int64_t MAX_FUTURE_BLOCK_TIME = BLOCK_TIME * N / 20;
static const int64_t DEFAULT_MAX_TIME_ADJUSTMENT = MAX_FUTURE_BLOCK_TIME / 2;
static constexpr int64_t MAX_BLOCK_TIME_GAP = BLOCK_TIME * 12;

That is fine for short block times.

But with N = 576 and T = 300,

the defaults are:

static constexpr int64_t MAX_FUTURE_BLOCK_TIME = 2 * 60 * 60;  // 2 hours

static constexpr int64_t TIMESTAMP_WINDOW = MAX_FUTURE_BLOCK_TIME; // 2 hours

static constexpr int64_t MAX_BLOCK_TIME_GAP = 90 * 60; // 1.5 hour

static const int64_t DEFAULT_MAX_TIME_ADJUSTMENT = 70 * 60;  // ~1.16 hour.

So if T*N = 172800, then 172800 / 20 = 8640 seconds, which converted to hours is 8640 / 60 / 60 = 2.4 hours, more than the default :/

From that, everything will be larger than the base BTC settings. So should I keep them, or do they need to be calculated another way for N = 576 and T = 300?

@zawy12
Owner Author

zawy12 commented Apr 13, 2024

Your N * T is 2 days, which means that if there is a significant price change in a few hours, it could attract a lot of temporary mining to profit at the expense of constant miners. In contradiction to my previous reply, you might want a balance between speed of response to price changes and stability when hashrate is constant. I'd like an N where the expected daily variation in difficulty under constant hashrate is equal to the expected daily price change. This approach gives 2 equations that need solving to determine N and T:

1/sqrt(N) = F
N * T = k

where F = the standard deviation of the fractional price change over a chosen time period,
and k = that time period.

For example, if you want to target 1 day as the relevant time period in which you want to respond to price changes, then k = 3600 * 24. If F is measured from price-history data to be 3% in that time period (F = 0.03 std dev per day), then N = 1,111 from the 1st equation and T = 78 seconds from the 2nd. Using the equations conversely, you are using T=300 and N=576, which is like saying you have a good balance between stability and price changes if price typically changes 4.2% in 2 days. If it doesn't change that much, you could argue for a larger N, being aware that this will cause 2^(-15) * 1152 = 3.5% error in difficulty at times when nBits is near the "cusp" of changing (this requires understanding nBits' compression scheme to explain). It's 1.75% error at times with N = 576.
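In code form, a sketch of solving those two equations (my own helper, not from the thread):

import math

def window_from_volatility(F, k):
    # Solve 1/sqrt(N) = F and N*T = k for N and T.
    # F: std dev of fractional price change per period; k: the period in seconds.
    N = round(1 / F**2)
    T = k / N
    return N, T

print(window_from_volatility(0.03, 3600 * 24))   # -> (1111, ~77.8 s)
print(window_from_volatility(0.042, 2 * 86400))  # -> (567, ~304.8 s), close to N=576, T=300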

@zawy12
Owner Author

zawy12 commented Apr 13, 2024

I should mention that ideally your limits on timestamps (future time and past time) would be the highest accuracy your miners can be expected to maintain, provided it's larger than your "typically longest" propagation delays. Let's say blocks and their validation can easily propagate under normal network conditions in 2 seconds. Then if miners can keep clocks with 2.1-second accuracy, technically they should, and nodes should reject blocks whose timestamps are more than 4.2+2 seconds in the past or 4.2 seconds in the future. But there can be abnormal network delays which cause blocks to be rejected under this rule, so you have to let PoW override the rule to repair the "breaking of synchrony" that occurred. The rule could be "ignore blocks that break the rule for 300 seconds, then let PoW decide which is the most-work chain." This seems like a lot of work for nothing, but it's the proper way according to Lamport's 1978 Clocks paper. The only benefits I've noticed are that it makes selfish mining impossible (the miner has to assign a timestamp before he knows when to release the block) and that it fixes a complaint people have had with "real time targeting". I described it in detail in a recent "issue" that included selfish mining.
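A rough Python sketch of that rule as I read it (the numeric limits are the example's made-up numbers, not recommendations):

FUTURE_LIMIT = 4.2        # seconds ahead of local clock (example's numbers)
PAST_LIMIT = 4.2 + 2.0    # seconds behind, allowing ~2 s propagation
HOLD = 300                # ignore rule-breaking blocks this long, then let PoW decide

def timely(block_ts, local_ts_at_arrival):
    # A block is timely if its timestamp is within the allowed window
    # around the node's local clock when the block arrives.
    return -PAST_LIMIT <= block_ts - local_ts_at_arrival <= FUTURE_LIMIT

def consider_block(block_ts, arrival_ts, now):
    # Timely blocks compete normally; untimely ones are ignored for HOLD
    # seconds, after which the ordinary most-work (PoW) rule takes over.
    return timely(block_ts, arrival_ts) or (now - arrival_ts >= HOLD)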

@someunknownman

someunknownman commented Apr 13, 2024

So what settings can you recommend if we don't go to "real time" mining, to prevent any possible issues? About hashrate I'm not really worried, as there is dual PoW (not multi PoW): 2 checks against one nBits = 2 algos, Yespower + Argon2id salted with SHA-512, with tricky RAM floating.

uint256 CBlockHeader::GetHash() const
{
    return (CHashWriter{PROTOCOL_VERSION} << *this).GetHash();
}

/* Yespower */
uint256 CBlockHeader::GetYespowerPoWHash() const
{
    static const yespower_params_t yespower_1_0_dpowcoin = {
        .version = YESPOWER_1_0,
        .N = 2048,
        .r = 8,
        .pers = (const uint8_t *)"One POW? Why not two? 14/04/2024",
        .perslen = 32
    };
    uint256 hash;
    CDataStream ss(SER_NETWORK, PROTOCOL_VERSION);
    ss << *this;
    if (yespower_tls((const uint8_t *)&ss[0], ss.size(), &yespower_1_0_dpowcoin, (yespower_binary_t *)&hash)) {
        tfm::format(std::cerr, "Error: CBlockHeader::GetYespowerPoWHash(): failed to compute PoW hash (out of memory?)\n");
        exit(1);
    }
    return hash;
}

// CBlockHeader::GetArgon2idPoWHash() instance
// -> Serialize Block Header using CDataStream
// -> Compute SHA-512 hash of serialized data (Two Rounds)
// -> Use the computed hash as the salt for argon2id_hash_raw function for the first round
// -> Call argon2id_hash_raw function for the first round using the serialized data as password and SHA-512 hash as salt
// -> Use the hash obtained from the first round as the salt for the second round
// -> Call argon2id_hash_raw function for the second round using the serialized data as password and the hash from the first round as salt
// -> Return the hash computed in the second round (hash2)

uint256 CBlockHeader::GetArgon2idPoWHash() const
{
    uint256 hash;
    uint256 hash2;
    CDataStream ss(SER_NETWORK, PROTOCOL_VERSION);
    ss << *this;
    
    // Hashing the data using SHA-512 (two rounds)
    std::vector<unsigned char> salt_sha512(CSHA512::OUTPUT_SIZE);
    CSHA512 sha512;
    sha512.Write((unsigned char*)&ss[0], ss.size()).Finalize(salt_sha512.data());
    sha512.Reset().Write(salt_sha512.data(), salt_sha512.size()).Finalize(salt_sha512.data());
    
    // Preparing data for hashing
    const void* pwd = &ss[0];
    size_t pwdlen = ss.size();
    const void* salt = salt_sha512.data();
    size_t saltlen = salt_sha512.size();
    
    // Calling the argon2id_hash_raw function for the first round
    int rc = argon2id_hash_raw(2, 4096, 2, pwd, pwdlen, salt, saltlen, &hash, 32);
    if (rc != ARGON2_OK) {
        printf("Error: Failed to compute Argon2id hash for the first round\n");
        exit(1);
    }
    
    // Using the hash from the first round as the salt for the second round
    salt = &hash;
    saltlen = 32;
    
    // Calling the argon2id_hash_raw function for the second round
    rc = argon2id_hash_raw(2, 32768, 2, pwd, pwdlen, salt, saltlen, &hash2, 32);
    if (rc != ARGON2_OK) {
        printf("Error: Failed to compute Argon2id hash for the second round\n");
        exit(1);
    }

    // Return the result of the second round of Argon2id
    return hash2;
}

static bool CheckBlockHeader(const CBlockHeader& block, BlockValidationState& state, const Consensus::Params& consensusParams, bool fCheckPOW = true)
{
    // Check that both proofs of work match the claimed amount (dual PoW logic)
    bool powResult1 = fCheckPOW ? CheckProofOfWork(block.GetYespowerPoWHash(), block.nBits, consensusParams) : true;
    bool powResult2 = fCheckPOW ? CheckProofOfWork(block.GetArgon2idPoWHash(), block.nBits, consensusParams) : true;

    // Checking if both PoWs are valid
    if (!powResult1 || !powResult2) {
        return state.Invalid(BlockValidationResult::BLOCK_INVALID_HEADER, "high-hash", "proof of work's failed");
    }

    return true;
}

@zawy12
Copy link
Owner Author

zawy12 commented Apr 13, 2024

People were using N=60 with T=60 or 120, so 5% of N * T gave an FTL of 180 seconds, which is a lot less than 2 hours. Your FTL = 2.4 hours is OK, but there's no reason not to make it a lot shorter. Every miner should be able to keep their clock within 5 minutes of UTC, so you could use that. A couple of coins used FTL = 15 seconds and said they didn't have any problems. I just think 2 hours is way too long and serves no purpose. I never saw a legitimate explanation for it in BTC. I would use 1 minute just to make miners keep an accurate clock.

Keep in mind peer time should be removed in accordance with Lamport's 1982 Byzantine paper (and earlier research), as I repeated in the LWMA code and described in my "Timestamp Attacks" issue. But if peer time is kept, then the limit for "revert to local time" (the miner's UTC time that he knows is correct) should be reduced from 70 minutes to 1/2 of FTL.

I haven't understood what you're doing with 2 PoWs. Each "hash" is actually a sequence of 2 hashes? Please respond in issue #79 you created instead of this thread.

@cryptforall

cryptforall commented Apr 14, 2024

So what settings can you recommend if we don't go to "real time" mining… (quoting the dual-PoW comment and code above)

Bellcoin uses Yespower and LWMA. How will you use two mining algorithms? https://github.com/bellcoin-org/bellcoin/tree/master/src. I don't believe the PoW mining algorithms you've selected are ASIC-resistant. Sorry Grammarly *

@someunknownman

someunknownman commented Apr 14, 2024

So what settings can you recommend if we don't go to "real time" mining… (quoting the dual-PoW comment and code above)

Bellcoin uses Yespower and LWMA. How will you use two mining algorithms? … I don't believe the PoW mining algorithms you've selected are ASIC-resistant. (quoting the reply above)

No ASIC-resistant algorithm exists; an ASIC can be created for any algorithm. It all depends on how much money can be spent to develop the ASIC.

Two proofs of work (PoWs) are used for each block's validation. Initially, the block undergoes verification using the Yespower algorithm. Then the same block is subjected to two rounds of SHA-512 (used as a salt), followed by two rounds of Argon2id. The block is deemed valid only if it passes both proof-of-work validations simultaneously.

For the Yespower proof of work, the function GetYespowerPoWHash() computes the hash. This function serializes the block data and uses the Yespower algorithm for the hash computation.

For the Argon2id proof of work, the function GetArgon2idPoWHash() is employed. This function serializes the block data and then performs two rounds of SHA-512 hashing to produce a salt. Following this, it conducts two rounds of Argon2id hashing. The resulting hash from the second round is returned.

To verify block headers, the function CheckBlockHeader() is used. It evaluates both proofs of work for the block. If either proof of work fails, the block is considered invalid.

For lightweight Simplified Payment Verification (SPV) wallets, only one of the proofs of work can be used for verification.

@cryptforall

cryptforall commented Apr 14, 2024

No ASIC-resistant algorithm exists… Two proofs of work (PoWs) are used for each block's validation… (quoting the two comments above, code omitted)

X16/17+ were considered ASIC-resistant, unless that was incorrect in 2019.
Wouldn't that increase the power required to process, and kill mining equipment faster? If one PoW is valid, does the miner stay stuck in a loop on the 2nd PoW if it's invalid, or does it attempt to re-mine the first? What mining software will you use? Is this for testing or research? You've got to think about the miners. If you don't care about ASICs, you won't get the miners.

@someunknownman

someunknownman commented Apr 14, 2024

The project is planned with no premine; there is no mining software and no existing pools; the algos are only for PoW; headers are native SHA-256 (LTC-like, to speed up sync).

The target must be valid at the same time for both PoWs. This is not multi-PoW, it is dual PoW, and it's a more experimental solution. Since it's dual PoW with one target valid for both, it's possible in the future to even remove one of the algos via a BIP without keeping any "trash code".

@cryptforall

The project is planned with no premine… The target must be valid at the same time for both PoWs… (quoting the comment above)

Interesting. Why not just trust one PoW? What is the benefit, and does it mitigate anything?

@someunknownman

someunknownman commented Apr 14, 2024

The project is planned with no premine… The target must be valid at the same time for both PoWs… (quoting the exchange above)

Interesting. Why not just trust one PoW? What is the benefit, and does it mitigate anything?

If one PoW has any problems or a potential CVE, we can drop it, since the target will still be valid, and remove it cleanly from the code rather than keeping "forking code". Yespower is good for CPUs and prevents GPU and ASIC mining with existing software.
Argon2id is good for use on all platforms (JS, Go, Rust, and Python libs exist); for Yespower, libs only exist for Node and Python (simple integration).

This is more of an experiment. Also, it was very difficult to find the best algorithm settings to keep the two algorithms at the same speed difference (hashrate).

It also prevents the situation of mining one algo to find targets and then trying to push them to the second algo to speed up.

@cryptforall

If one PoW has any problems or a potential CVE, we can drop it… (quoting the exchange above)

Since it's post-incident, just change the algo at block # and do a wallet update. You will be doing wallet releases periodically for the first few years. Save yourself the headache. (Technically a fork, but it's the same, because you have to change the code to remove it.)

@someunknownman

someunknownman commented Apr 14, 2024

Also, I don't use Argon2id natively; there are 2 memory locks, which is problematic on a GPU or FPGA. The first round locks 4 MB of memory, then for the next round you need to allocate 32 MB again ("lock > free > lock").

Or simply think about why BTC uses 2 rounds of SHA-256 and not one; it's the same idea.

Target: the block must have the same or more leading 000… in both hashes. The hashes will be different, but both must satisfy the target for the same data (a "target collision").
