WTEMA is the only difficulty algorithm people should use
Every PoW coin should just use WTEMA. It's simple, and every good algorithm gives almost identical results anyway. Relative ASERT is really an EMA algorithm. "EMA" here means
target = prior_target * e^(error/N)
where N = smoothing_constant aka filter, and for relative ASERT, error = t/T-1.
t = prior block solvetime
T = desired average block time.
WTEMA is a first-order "infinite impulse response" (IIR) filter. It's ideal in many situations. It very closely approximates relative ASERT by replacing e^x with 1+x, which is very accurate for small x. Absolute ASERT is slightly better than relative ASERT because it prevents the error caused by the e^x function being imperfectly estimated and by nBits imprecision. That error grows linearly with larger N.
This means WTEMA is:
target = prior_target * ( 1 + t/T/N - 1/N )
This can be expanded out to use integer math (no decimals), which is accurate because targets are so large, but make sure the target calculation can't overflow.
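The expanded integer form can be sketched as follows. The values of T, N, and MAX_TARGET here are illustrative assumptions, not from any particular coin:

```python
# Minimal WTEMA sketch in integer math (illustrative values, not consensus code)
T = 600                      # target block time, seconds (assumed)
N = 288                      # smoothing constant, blocks (assumed)
MAX_TARGET = 2**256 - 1      # assumed proof-of-work limit

def wtema_next_target(prior_target, solvetime):
    # target = prior_target * (1 + t/T/N - 1/N)
    #        = prior_target * (T*N + t - T) / (T*N)
    # Python ints are arbitrary precision, so prior_target * (T*N + t - T)
    # cannot overflow; in C/C++ this product needs a wide-integer type.
    next_target = prior_target * (T * N + solvetime - T) // (T * N)
    return min(next_target, MAX_TARGET)
```

With solvetime == T the target is unchanged; faster blocks lower it (raising difficulty), slower blocks raise it.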
To code ASERT, you need to implement a b-spline approximation of e^x in integer math, because compilers and hardware may compute e^x slightly differently, and that can cause forks when validators don't see exactly the same result. But this adds a lot of code, so I prefer WTEMA: it's so much simpler, with only a slightly larger error than absolute ASERT. The error is mostly consistent and predictable with modelling, and it can be reduced by changing the target block time, such as using T=594 instead of 600 (bitcoin) if the error in modelling makes T 1% higher (e.g. if average solvetime = 606 in modelling).
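To see how close 1+x is to e^x at realistic solvetimes, a quick illustration (T and N are assumed example values):

```python
import math

# Per-block update factors for relative ASERT vs WTEMA (illustration only)
T, N = 600, 288      # assumed block time and smoothing constant

def update_factors(t):
    x = (t / T - 1) / N              # error/N, small when N is large
    return math.exp(x), 1 + x        # ASERT factor, WTEMA factor

# The two factors differ by roughly x^2/2: on the order of 1e-5 even for a
# solvetime of 2*T, which is why WTEMA tracks relative ASERT so closely.
```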
Time and Timestamp rules
To summarize this section: to prevent all the attacks related to difficulty algorithms, time, and timestamps, Lamport's ancient distributed-consensus requirements on time and timestamps should be applied to Nakamoto consensus. Nakamoto consensus is a distributed consensus mechanism, and Lamport's derivations were independent of the algorithm. See "Timestamp Attacks" for a history of all the difficulty algorithm, time, and timestamp attacks on Nakamoto consensus that resulted from Satoshi et al. not following Lamport's requirements. These requirements are:
- Peer time should not be used, only independent local time.
- Monotonic timestamps should always be used, not the median of the past 11 timestamps (MTP).
- The Future Time Limit (FTL) should be reduced to a fraction of block time.
- Timespan limits and other methods of changing what a correct difficulty algorithm calculates should not be used.
Peer time should not be used. An independent local time that is correct without asking anyone is the only valid basis for Nakamoto consensus, because the clocks in distributed consensus mechanisms must be more secure than the consensus mechanism itself. The proofs go back to (and before) the original Byzantine paper, when distributed consensus was called interactive consistency. Monotonic timestamps should always be used. MTP is an ugly and harmful fix for not requiring sequential timestamps in every block and should be replaced in all code with monotonicity. The monotonicity rule for miners should be:
A newly-seen ("tip") block's timestamp is valid if 1) the timestamp is less than my local time plus the future time limit (FTL), and 2) it's at least 1 second after its parent block's timestamp. Otherwise reject the block permanently, unless in 1) there is reason to believe local time had an error. When mining, the timestamp in the header is local time or 1 second plus the parent block's timestamp, whichever is greater, but do not release the block unless and until that timestamp is less than my local time plus the FTL.
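The rule above can be sketched as follows; the function names and structure are mine, not from any implementation:

```python
# Sketch of the tip-timestamp validity and monotonicity rule (hypothetical names)
def tip_timestamp_valid(block_time, parent_time, local_time, ftl):
    # 1) not too far in the future, 2) at least 1 second after the parent
    return block_time < local_time + ftl and block_time >= parent_time + 1

def mining_timestamp(parent_time, local_time):
    # Header timestamp: local time or parent + 1 s, whichever is greater.
    # The miner must hold the block until this is < local time + FTL.
    return max(local_time, parent_time + 1)
```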
The FTL should be coded as a function of N blocks and T block time, where N*T is the "difficulty averaging window" in seconds. FTL should be less than, say, 5% of N*T to keep the artificial reduction in difficulty below 5% if a timestamp is set forward in time to the limit of the FTL. This FTL rule is actually way too lenient, as described in the selfish mining section below. If there's a peer time due to devs not understanding the importance of local time in Nakamoto consensus, the allowed peer time error should be coded as FTL/2. BTC's "revert to local time if median of peer time is more than 70 minutes different from local time" approximately does this (its FTL is 120 minutes). But BTC codes the 70 minutes as a constant instead of making it a function of FTL such as FTL/2, which will allow a Sybil or eclipse attack if FTL is reduced below 70 minutes without removing peer time or reducing the time in the revert rule.
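For instance, with the 5% guideline and assumed example values of N and T:

```python
# FTL as a function of the difficulty window N*T (5% guideline from the text)
T = 600                          # block time, seconds (example)
N = 288                          # difficulty averaging window, blocks (example)
FTL = N * T // 20                # 5% of N*T = 8640 s (2.4 hours)
PEER_TIME_REVERT = FTL // 2      # if peer time exists at all, revert at FTL/2
```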
There should not be any timespan limits. Combined with non-monotonic timestamps, they allow an unlimited number of blocks to be mined in 2.8x the difficulty averaging window if the attacker has more than 50% of the hashrate. Almost all coins have this exploit, and it works to a lesser extent even if there's monotonicity. I discovered this attack, and within a few months of describing it, 5 coins had to fork to recover thousands of blocks. It's the oldest open issue on Litecoin.
Not following these rules has resulted in attacks on several coins where miners had alternate sources of mining income, which is itself a violation of Nakamoto security. BTC is the most secure in having the "non-repurpose-ability" security requirement, and as a result it has had a lot of protection despite not correctly following distributed consensus requirements (in addition to having the worst difficulty algorithm). Most of the successful attacks on alts related to difficulty algorithms originate from BTC code and ideas in how it handles timestamps in ways that violate Lamport's 1978 clocks paper in the context of Nakamoto consensus.
Selfish Mining Prevention
The ancient requirements for timestamps in decentralized consensus from Leslie Lamport's 1978 clocks paper are independent of the consensus mechanism. Selfish mining can be prevented by enforcing them, which means tightening BTC's clock and timestamp requirements. To get 51% security, the known minimum requirements for all distributed consensus mechanisms are: digital signatures on messages (provided to blocks in the form of the costliness of winning hashes), independent clocks (peer time should be removed), monotonic timestamps, timestamps that are much more accurate than the length of a consensus round (block time), and clock ticks plus their allowable error larger than propagation delays (i.e. timestamps are assumed to have more potential error than propagation delays).
In short, enforcing accurate timestamps stops selfish mining because the attacker has to assign a timestamp and find a winning hash before knowing when he'll need to release the block, so honest miners with accurate local time can detect and temporarily reject blocks with bad timestamps, giving honest miners a chance to find blocks with honest timestamps.
A specific scheme to follow the above requirements:
The Future Time Limit (FTL) should be reduced from 2 hours to the maximum-expected "typical" propagation delay (roughly 2 seconds; median delay is about 250 ms) plus 2x the max allowed error in well-running local clocks (I assume 2 * 3 seconds). There should also be a "PTL" (past time limit) of 2x the max allowed error before local time, or 1 second after the parent block (to enforce monotonicity), whichever is greater. If a timestamp is not within these limits, the block should be ignored for 1 block time. If after that timeout it still has the most work, it's accepted, provided the timestamp is no longer greater than local time plus FTL and it is monotonic. In other words, a block can come out of timeout if there appears to have been a network delay and it's only breaking the PTL rule, which can occur if there's a delay.
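A sketch of this scheme, using the assumed constants from the text (2 s propagation delay, 3 s clock error):

```python
# Sketch of the tightened FTL/PTL scheme (constants are the text's assumptions)
PROP_DELAY = 2                    # max typical propagation delay, seconds
CLOCK_ERR = 3                     # max allowed local clock error, seconds
FTL = PROP_DELAY + 2 * CLOCK_ERR  # future time limit: 8 s
PTL = 2 * CLOCK_ERR               # past time limit: 6 s

def classify(block_time, parent_time, local_time):
    if block_time <= parent_time:          # monotonicity violated: reject
        return "reject"
    if block_time >= local_time + FTL:     # too far in the future: reject
        return "reject"
    if block_time < local_time - PTL:      # PTL breach: timeout ~1 block time,
        return "timeout"                   # then accept if still most work
    return "accept"
```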
This scheme increases the chances of a chain split if there are network delays, but the timeout rule allows PoW to eventually determine the correct tip.
Value of N, removing parameters from code
Others and I have spent a lot of time trying to decide on the value of N, the number of blocks to use in "difficulty averaging". It's not crucial, except that you want a larger value than I have used in the past, but not so large that difficulty stops responding to price changes in the coin, and definitely not so small that the difficulty changes a lot due to the random nature of solvetimes. The StdDev of difficulty for ASERT & WTEMA (when hashrate is constant) is about 1/SQRT(2*N), so if N=50 blocks, you'll have 10% StdDev in difficulty. My current thinking is to set it equal to 1 or 2 days' worth of blocks. This sounds flippant, but it's serious and the result of modelling and experience. This makes N a function of T. In everything related to blockchains that seek to be decentralized and objective, it's really good to prevent developers from needing to choose any constants. The science and math of consensus security should choose everything. Even T could be a function of something, though it may still require some ultimate constant. For example, you may want to change T in a DAG blockchain if you want the DAG to maintain a specific width (the subsequent constant) or have a consistent confirmation time as propagation delays change.
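Setting N from T as suggested might look like this; the one-day choice and the function names are illustrative assumptions:

```python
import math

# N as a function of T: one or two days' worth of blocks (illustrative choice)
def choose_N(T, days=1):
    return days * 86400 // T

def difficulty_stddev(N):
    # approx StdDev of difficulty for ASERT/WTEMA at constant hashrate
    return 1 / math.sqrt(2 * N)
```

For T = 600 s this gives N = 144 (one day) or 288 (two days), with a difficulty StdDev of roughly 6% and 4% respectively at constant hashrate.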
Real Time Targeting
One of the complaints of real-time targeting (RTT) has been that it allows the miner who is assigning the timestamp to reduce his own difficulty. Notice that the selfish mining fix also greatly reduces this effect, so the barrier to entry to RTT should be lower.