Add wtema-72 weighted-target exponential moving average #30
Conversation
Add support for a new WTEMA DAA characterized by a single "alpha" parameter and no fixed window size.
How can the right-hand
|
@zawy12 wtema_target is a global variable in the sim. If algos were classes in the sim, we could use a class (as opposed to instance) variable. The target is initialized at the beginning of each simulation run to the difficulty value of the warm-up blocks. In real life it should be initialized with the last target produced by the prior algorithm, or in the case of a new blockchain, the initial target for the chain. |
Sounds awesome, I intend to play with it today |
I think you're letting … This is probably the best because it works with integers and you don't need to do anything with solvetimes. The only thing left to do is figure out what the best N is, as a function of target solvetime, which I've worked on. The "N" = 104 in this one might be ideal for coins with 150-second solvetimes, but for 600 seconds, 50 might be best. What's hard to capture in the metrics for success is the effect of price changes on miner motivation. Without that, a large N is going to look better in tests when half of N might be best. The best algorithm so far in a live coin is N=60 with my WHM modification to WT that Masari has implemented. They are solvetime=120, which implies a smaller N is needed for 600 seconds.

This is really a basic EMA of the solvetimes. Jacob's is a certain type of EMA for rates. One is probably better in theory and practice, but I don't know which. I tried an inverted form of Jacob's to work like this, but it didn't work. One hint that Jacob's might be theoretically best is that its solvetime stays perfect. Like all the others, this one is a little too high, with solvetimes about 0.5% too high. If you go to a lower N, the solvetime should get a little higher. I have equations to correct the solvetimes for non-EMA algos when N < 200. If you could multiply by a correction factor of 0.99995, it would correct the 0.5% error. Non-EMA algos would need 0.995 (you can even see the SMA N=144 solvetime is a little too high, ~0.2%, from experiment and BCH live data), but EMAs are different.

Like Jacob's, this algo is really susceptible to any "round off" or "carry over" error. I do not know how or if those errors could occur in chains, but if the exact previous … It looks really good, and it's nice to see one that acts differently from the others. Like the SMA N=144, the statistics are going to look good at the expense of slowness in responding to price changes. That's where the new DAA is having problems. So it's halfway between where the DAA is and where I think it should go (like EMA with N=30 or WHM with N=50). A faster version of this might be the best. |
|
The alpha here is 1 - 0.5^(1/72), so it is following the basic EMA: … The previous terms are weighted according to the following, but this is just an EMA: … where p is our ST/T. I'm glad Tom mentioned the "half life" view of it, since 1-alpha in the above means past ST/T ratios get a weighting of … Jacob's statement that this N=72 should be compared to his EMA with N=100 is correct. They function about the same. Jacob's has slightly fewer post-"hash attack" delays, but Jacob's does not do as well as this one on my "blocks stolen" metric (blocks obtained by an intelligent big miner at lower-than-correct difficulty). But they are very close. The main advantage of this one is the use of integer math, as the coins I'm talking to really want integer math. Allowing negative solvetimes is also nice. BTW, for small alpha, the following strange thing is true: …

It should not have a problem with timestamps, because in my brief testing it handled negative solvetimes well. But since devs will often not use a signed integer, some will just use ST = 0 if ST < 0, which allows a disastrous exploit: an attacker with 20% of the total hashrate only needs to assign old timestamps at the minus-1-hour limit to make the difficulty drop to 1/2 the correct value in 50 blocks. See method 2 in this article. The fix, if you can't allow negatives, is to use the basic limitation WT144 and Jacob's EMA are using, as long as you don't use a faster version of this like I would like to see. When it responds faster, a single timestamp set to the forward limit has too big an effect on it. Monero allows 24xT and BTC clones 12xT. The fix for faster-responding algorithms is in method 3 of that same article. It needs some care to be symmetrical, as there's an exploit if it is not.

I will recommend this to the 6 coins about to fork or begin and who are following my recommendations (sumokoin, masari, BTG, masari, and I think new BTCP and ZGLD). Most of them have 4x faster solvetimes, so the 104 is probably a pretty good choice for them, but maybe it needs to be about 80. For the T=600 coins I'm going to recommend about N=50 with the complicated solvetime protection (method 3). To be nice, everyone coding this algorithm should include a comment "Solvetimes must be allowed to be negative or use [include Neil's simple max time subtraction method as commented out]". To get the average solvetime more accurate, replace 600 as targetSolveTime in the equation with 597.6. Coins with different solvetimes use 0.996 of their target time. At N=50 the adjustment is about 0.99. |
You can make an EMA block-based ("Each block gets x% less weight than the one that followed it, regardless of how long they took") or time-based ("Each block gets x% less weight for each 10 minutes since it was mined"). Both are sane things to try. I'm arguing that time-based is better here, because it responds to difficulty drops better (faster), and is more mathematically correct for what we're trying to do (maintain a 10-minute avg block time). I actually looked into both pretty closely when I first tried out EMAs for difficulty. |
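As a rough sketch of the two weighting schemes being contrasted (the function and parameter names here are illustrative, not code from this repo):

```python
def block_based_weights(num_blocks, decay_per_block=0.99):
    """Block-based EMA weighting: each block's weight is a fixed fraction
    of the weight of the block that followed it, regardless of solvetime."""
    return [decay_per_block ** age for age in range(num_blocks)]

def time_based_weights(block_ages_seconds, half_life_seconds=6 * 600):
    """Time-based EMA weighting: each block's weight halves for every
    half_life_seconds of wall-clock time since it was mined."""
    return [0.5 ** (age / half_life_seconds) for age in block_ages_seconds]

# Three blocks mined 0, 600 and 7200 seconds ago: the time-based scheme
# discounts the oldest block much more heavily when blocks came slowly.
print(block_based_weights(3))
print(time_based_weights([0, 600, 7200]))
```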
Jacob's EMA:
[deleted my approximation for degener8's equation because it did not work] When Jacob's N is set to Degenr8's recip_alpha, I'm not sure I can see a difference in the algorithms. |
These charts show the EMAs are indistinguishable. The charts are 4000 blocks. The red is the hashrate being applied, normalized to the difficulty. T=1, D=1, and HR = 1 most of the time. I included my modification of WT-144 for reference (it says N=52 but it's really N=60). Notice how the WHM does not vary from the correct difficulty by more than 25% very often, so miners will not attack it much. Also notice it's a lot faster in case miners do come on strong or leave (last image).

To get an EMA that acts like a WHM (or WT), use an N half as big. So N=30 to 50 for these EMAs is what should be used if T=600 seconds. N=104 is fine if the solvetime is T=150 seconds. N=104 is like N=208 for the WT and WHM, so it's far from ideal in response rate. Again, the thing consistently being missed is that price changes are not modeled in the code. The lines in the first image below are "crazy flat". There's no need to make difficulty that smooth. The statistics look good, but that's irrelevant. Again, look at the awesome live data from the super-small Masari coin using N=60 for WHM. Super-small means it has to deal with much higher hashrate changes, and it still beats the pants off the N=144 SMA. And if the same results are desired in T=600, N has to be even smaller than 60, while the thing being promoted here is effectively N=208. It may be that Masari would do even better with N=100, which implies about N=50 to 80 for WHM, which means N=25 to 40 for these EMAs. All these statements were supported by links above. |
The relationship between the two EMAs is very reminiscent of how the harmonic mean is related to the arithmetic mean:
I can make Jacob's avg ST more accurate with e^-x in place of the approximation, or by extending the power series of the approximation one more term, but I can't find a way to use an e^-x to correct the small avg ST error in Degenr8's. Jacob's EMA hints there is a better theoretical version of Degenr8's, or that Jacob's is theoretically better. |
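One way to read that analogy (this is a paraphrase, not the elided equation): since target is the reciprocal of difficulty, an arithmetic (or exponentially weighted) average of targets corresponds to a harmonic average of difficulties:

```latex
\frac{1}{n}\sum_{i=1}^{n} \mathrm{target}_i
\;\propto\; \frac{1}{n}\sum_{i=1}^{n}\frac{1}{D_i}
\;=\; \frac{1}{\mathrm{HM}(D_1,\dots,D_n)},
\qquad
\mathrm{HM}(D_1,\dots,D_n) = \frac{n}{\sum_{i=1}^{n} 1/D_i}.
```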
Do you run any tests where, say, 90-99% of hashrate instantly disappears for a few days? This is not totally unrealistic - eg, after a HF one or the other branch could be in this state. This is the main situation I can think of where the difference between a "block-based" EMA (like wtema) and a "time-based" EMA (like emai) could be material. |
Good catch. These EMAs are not nearly as good as WT and WHM at raising difficulty during an extreme hashrate increase. But they come down as quickly, which is really important. Yours with the integer version "craps out" sometimes if no other timestamp protection is used, as shown below. Red is the actual hashrate. The WT looks better than WHM here, but it was the reverse in the everyday testing I did, so I'm going to reconsider it. The WHM gets a "jump start" on rising, which is important, but then it seems more reluctant to reach the final correct value. The WT seems to drop better, which is really important. The N for both EMAs is 30 because they had no chance of competing with N=104. Notice the WT and WHM have N=2x30, because that is when they seem most like the EMAs.

Your integerized hash-rate EMA is not fooled by bad timestamps that try to raise difficulty, but it is terribly subject to attempts to lower difficulty, which is not good (if no other protection is used). Degener8's solvetime-EMA rises in both cases, which is a better situation. But it seems timestamp protection is needed in both, especially if a smaller N is used. The first hump is with timestamps from a 50% miner always assigning at the MTP, 6xT into the past, which causes the next honest miner's block to show an apparent solvetime of +7xT. The second solvetime-EMA hump is from a 50% miner always assigning +12xT, which causes the next apparent solvetime to be -10xT. Neil's timestamp protection might solve the problem in both cases, but as I said above, for the smaller N that T=600 seconds needs, my method 2 in the link above is desperately needed if miners figure it out. A better alternative to the complexity of method 2 is to allow the negative solvetimes and enforce block_time limits of +6xT and -4xT. This will not degrade performance under extreme conditions, allows the solvetime-EMA to be most accurate under bad timestamps, and allows small N. So I would make the algorithm:
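A minimal sketch of that clamping idea (the names and the wtema-style update step here are assumptions for illustration, not the exact algorithm proposed):

```python
T = 600   # target block time in seconds
N = 50    # effective EMA window; the latest block gets roughly 1/N weight

def next_target(prev_target, solvetime):
    # Accept negative solvetimes, but clamp to [-4*T, +6*T] so one bad
    # timestamp cannot push difficulty too far in either direction.
    st = max(-4 * T, min(6 * T, solvetime))
    # One possible EMA step (wtema-style): scale the prior target by how
    # far the clamped solvetime deviated from the ideal block time.
    return prev_target * ((N - 1) * T + st) // (N * T)
```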
I would like to see the code clarified with different variable names. Also, remember there is an error in this algorithm as N gets smaller: IDEAL_BLOCK_TIME needs a new value depending on N, 0.5% smaller for N=100 and 2% smaller for N=30. So in initialization, the BT equation below should be used to get the correct average solvetime. The same equation I found for WHM works here.
This is my current equation for the best value of N based on the target solvetime: |
Correction: Jacob's is an EMA of "needed difficulty" while Degener8's is an EMA of "needed target". The needed targets are weighted, but "weighted" is part of the definition of EMA. I can't find a way to use an e^x to get the EMA-T to have an exact solvetime like the EMA-D. |
Using the simplified versions above of the EMA-T and EMA-D, setting them equal to each other, and solving for t/T gives 1 (where t=solvetime and T=600 seconds). I believe this means that, on average, if they are doing their job of keeping an average t = T, then they are the same equation. [edit: I do not mean to say they are literally the same equation. They are probably the same equation on average.] |
I've completed my additions to this in order to select N, correct the avg solvetime, and handle bad timestamps. I will recommend this to Karbowanec, Sumokoin, ZGLD, BTGP (both Platinum and Private), BTG, and Masari. Masari will push it to Monero / cryptonote for other clones. I think Masari will employ it first, which is good for comparison to my WHM version of WT, which has done awesome. |
I finally found the idealized equation for EMA-je and EMA-th that shows they are equal within 0.02% for most solvetimes, and equal to within 0.005% on average.
Substitute The
I think the following EMA code is best since it keeps the solvetime accurate for all N. Does it qualify as "integer math"? The timestamp protection is symmetrical, and the 10xT limit does not slow down D recovery much, even after 1/50th hashrate drops.
The solvetime has to be < -1000xT for the denominator to go to zero or negative for N>1. The 1st pair of equations above can be re-written in standard EMA form:
|
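For reference, the generic first-order EMA recurrence being referred to has the form (this is the textbook form, not necessarily the exact rewriting that appeared in the comment):

```latex
y_n = \alpha\, x_n + (1-\alpha)\, y_{n-1}, \qquad 0 < \alpha \le 1,
```

where x_n is the newest observation (a solvetime ratio or target), y_{n-1} is the previous smoothed value, and alpha is roughly 1/N in this thread's notation.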
I did more testing and digging. I'm at risk of writing a very long post so here's the upshot:
More details - happy to expand on any:
|
I corrected the exact algorithm above to allow integer math. If you use it, then you don't have to pick between any of the EMAs. But I've decided WHM is the best out of the EMAs, Digishield v3, and WT. These charts show why. But I need to try WT with a higher N than WHM, because WHM may act slower than WT, which gives it an unfair advantage in the metrics (it's hard to remind myself not to fall for the large-N trap when the large N is hidden in the algo). These EMAs have much larger effective N's than they appear to. Charts: stability, and under typical hash attack. The smoother line of WHM implies it's cheating with a larger effective N, so WT might be best. |
If WT has an overflow problem you just scale it down at each step in the loop by N, N*(N+1), or maybe even N*(N+1)*T and multiply by the same factor at the end. |
Which is the implementation that reproduces ema/ema2 using only integer math? Have you verified that for very long block times, it multiplies target by ~minutes/600, like they do? |
@jacob-eliosoff Your points 1 and 4 are what I was referring to in #32 (comment). In your next_bits_ema function, alpha is not constant but a function increasing in the current solvetime. Sorry I'm not sure which version of ema this is -- I suspect it's not ema2. Where is ema2? Your point 2 is how fixed-window WT dealt with negative times, out of necessity because it built up the new target from scratch instead of simply working from the prior target as these ema algos do. |
Ah yeah sorry ema2 isn't in the main branch yet: see #34. It's almost the same as the old ema except it adds the negative block time handling from #2 above. (Whereas emai is my integer-math linear-approx version of ema, but I like wtema better than emai, apart from the negative block time handling.) |
I can't get WT to work better than WHM. WHM is the best. It responds faster than WT and the EMAs to hash attacks and yet does not encourage more attacks by accidentally going too low. When the MTP delay in Digishield v3 is removed to improve it, it ties WHM in response speed plus low random variation, but it has double the delays of WHM.

Bad timestamps: Say a 20% hashrate miner comes on and always assigns the max allowed timestamp. This is a best-case scenario: miners who do timestamp manipulation are 2x to 10x a coin's baseline hashrate if the coin is not the largest coin for that POW. This also assumes they use BTC's 12xT max forward time instead of what Monero clones default to (24xT). Timestamps, if every solve is magically 1xT (so the conclusion will be the average case): The method WT-144 and EMA-je are using gives reported solvetimes: So the EMA-je and WT code enables an exploit in timestamps that gives 25% extra profit (the Price/Difficulty ratio) for 20 blocks. The 25% is extreme because BCH sees 3x more hashrate if the P/D ratio falls by 25%, and its slow response is why it has longer delays than a good algorithm (see below). In Zcash clones with Digishield v3, miners get 20 blocks 3 times a day when the P/D ratio accidentally rises above 25%.

Selecting N: In Zcash's digi v3 the price changes don't help. But the random variation is the main problem, combined with using MTP to handle bad timestamps, which gives away the first 6 blocks without adjusting. Zcash has 1/2 the delays and "stolen blocks" of BCH. It would do even better if MTP were not being used, which is an option thanks to being able to accept negative solvetimes. Digi v3 with N=17 is like an SMA, WT, or WHM with N=60. Due to Zcash having 1/4 the blocktime, BCH would need a smaller N to get equal results, although there is a penalty to not having access to as many data points per real time, so it can't do as well unless it has a better algorithm combined with a lower N.

Summary: N too low suffers from random variation in difficulty, and N too high suffers from variation in price (price moving faster than D). More data to show lower than N=60 is needed for T=600 coins: I got 3 coins to use N=17 with SMA. Their delays and hash attacks have been 3x worse than BCH's DAA. They have T=150 seconds, which in BCH's T=600 terms is like N=17*(150/600)^0.3 = 11. So N=11 is 3x worse than N=144 because of variation in D instead of variation in price. These are the tail ends of a hump whose N=? peak we want, converted to an N for WT and an N for EMA. Based on this I selected my modified version of WT (WHM) to be N=60 for Masari's T=120. It has done great. And its problems from attackers were much worse than the other 2 coins using my N=17.

The following first 2 charts compare Zcash and HUSH. They have the same digi v3 and HUSH is doing much worse, ostensibly because it is 1% the size of Zcash. Masari is doing better than both of them and it's a LOT smaller than even HUSH, so it should have the worst problems of all, and its history showed it was in a terrible situation. And yet it has 4x fewer delays and nearly 4 times fewer stolen blocks than HUSH (although HUSH does not always perform this badly). It even beats Zcash by a wide margin in delays, when Zcash was stable in hashrate ... all while Masari is a micro-coin and its difficulty varied. BTW, some are crediting the new difficulty algorithm for Masari's doubling in price, which is what is causing the difficulty to rise.
Masari's colored spikes make it look worse, but because it reacts fast, each of those delays and attacks is very brief compared to BCH and the others. Since Masari has T=120, the maximum N BCH can use to be as good is N = 60*(120/600)^0.3 = 37. See this article. |
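A quick check of the block-time scaling rule used in these comparisons (the exponent 0.3 and the numbers come from the comments above; the helper name is made up):

```python
def scale_n(n_ref, t_ref, t_new, exponent=0.3):
    """Rescale a window size N from one target block time to another
    using the N_new = N_ref * (t_ref / t_new)**0.3 rule of thumb."""
    return n_ref * (t_ref / t_new) ** exponent

print(round(scale_n(17, 150, 600)))   # ~11: the SMA N=17 @ T=150s case in BCH terms
print(round(scale_n(60, 120, 600)))   # ~37: Masari's WHM N=60 @ T=120s in BCH terms
```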
So BCH should use WHM with N=40 and employ method 6 timestamp handling (limit +8xT and -7xT from previous timestamp). |
@zawy12 I completely agree that slow response is the biggest problem with the current BCH simple moving average. The combination of a price and difficulty change is detrimental; in addition, predictable difficulty changes are self-reinforcing. This is gameable and may explain why BCH is not more profitable than BTC anywhere close to the expected 50% of the time. The shape of the weight curve in any exponential moving average is steeper than linear with equivalent half-life. So how do you conclude that WHM is faster-responding than anything ema-based? |
In trying to answer your question, I realized the N in the EMAs is actually an honest-to-god "half life". So a "full life" is 2*N. So for evermore, let's throw a 2 into the equations to correct for this so others do not make a mistake in the direction we were headed. That is, you started investigating N = 1/2 of 144, which makes me say "yippee", and Jacob and I showed it's really an N=100. But that's only a half-life, so it's not N like the SMA or WT. It's N=200, which I think we agree is not good (although it may equal the performance of SMA with N=144). Compare your EMA approximation and WT with N=2, when the previous block saw a t=T/2 solvetime.

I use admittedly inflammatory language by saying "hash attack" and "blocks stolen". But it's only because "opportunistic mining" does not portray what small coins experience, and "blocks acquired at cheap difficulty" is too cumbersome. So, I want to object to your viewpoint that miners are or can be actively damaging beyond a simple, blind profit motive. You would be correct with "self-reinforcing" or "gaming" if there were oscillations in any of our algorithms. I see references here to oscillations and "ringing", but the tests I run on them try to make this happen and I can't. That was my fear with EMA and WT at first. They predict the currently needed difficulty and no more. Trying to extrapolate to future blocks like Hull's moving average is where trouble can begin.

I think BCH can only be improved a little. But some days people will be waiting 4 hours when they really, really needed that $10k in less than 2 hours and were planning on it. I don't think the difficulty has anything to do with the profitability problem (of which I have no knowledge). That kind of problem can occur at most at the 25% level if difficulty is swinging up a lot from big miners jumping on and off, which isn't happening here. Miners jump on for 1/3 of the N and then leave other miners with higher difficulty. But here, if they are doing it, they are only gaining about 5% by jumping on when it is accidentally low, and then leaving others with only 5% accidentally high. Any 50% is from something else. |
I can't believe N=37 is safe for BCH. It has too much variation above 20%, which would invite attacks. Here's a different line of reasoning to argue for the highest N. The charts above show BCH can do at least twice as good: half as many delays and half as many blocks stolen are possible. Random variation is not hurting BCH. Delays and blocks stolen are proportional to N (due to linearly slower response) when random variation is not the cause. Therefore N needs to be 72 with the given SMA. WHM is a lot better at responding faster, but it also picks up more random variation for a given N. So it should do better than cutting delays and blocks stolen in half with WHM and N=72. Looking at N=60, I see that 4 times a week it would drop to 80% of the correct difficulty, inviting more hashers. Will price+fee dynamics happen as often? That is the balance we want: random variation in difficulty should attract miners about as often as price changes do; otherwise it's not responding as fast as we want to price changes. I suspect price+fees vary more than this, so I believe this is higher than it should be. N=60 seems to be an excellent balance between my fear of random variation attracting miners and getting a fast response to price changes. N=37 had about twice as many drops below 80%, which is still within reason as to what price+fees is doing (and hashrate competition also varies). |
So here's my best algorithm for T=600 coins, and it sure is working well for a t=120 coin.
|
The "window" parameter to the ema algos is not the half-life: it's the simple moving avg "window" they aim to match. Long topic but the short of it is, the half-life is ln(2) (0.69) * the window: so for ema2-1d, with a window of 1 day (24*60*60), the half-life is ln(2) * 1 day = 16h38m. More generally, it's hard to compare these algos, given that they all have "responsiveness"/"window" parameters and the parameters sometimes have different meanings. I have three suggestions on how to compare them:
|
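A quick numeric check of that half-life relationship (illustrative only, not code from the repo):

```python
import math

window_seconds = 24 * 60 * 60            # ema2-1d: a 1-day "window"
half_life_seconds = math.log(2) * window_seconds

hours, rem = divmod(int(half_life_seconds), 3600)
print(f"half-life ~= {hours}h{rem // 60:02d}m")   # ~16h38m, as stated above
```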
Yeah, I guess it was silly to hope that the EMA taking about 2xN to reach the correct difficulty after a hashrate change was meaningful. It takes over 2xN for the EMA to fully correct to a step signal. It starts off well, but then turns skeptical well before it reaches the goal. And WHM / WT get up there faster than SMA. As far as differences between the EMAs, I don't think we should say there are any. I can't see any in all the plots I've done, except the solvetime is perfect when the exact equation is used. What kind of differences are you looking for when the individual solvetime values are this close? |
Again, here are the simplest forms of the equations:
We've just been looking at a case of
|
@zawy12 Will you implement your WHM in this simulation (the one maintained in this repository)? Is your simulation available for others to run? |
For a more direct comparison to your wt, it needs 4 line changes:
|
If we can concur on WHM with N=60 and the bad-timestamp handling, it would be great. Then it needs to be converted to BTC code. There's resistance to using signed integers for some reason. I called the above WTZ because it does not have the same timestamp handling and adjustment as WHM. They are the same if there are no bad timestamps, although WTZ will be closer on avg solvetime. I'm running mining.py for the first time to try to code WHM. Is 15 seconds for 20k blocks normal? Excel does 90k blocks (9 algorithms) with 9 charts printed side-by-side in half a second. Would printing to a file be faster? How do I code that?
|
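On printing to a file: a generic pattern that is usually much faster than console output (this does not reference mining.py's actual data structures; the names are made up):

```python
import csv

def dump_blocks(path, blocks):
    """Write one row per block; 'blocks' is an iterable of
    (height, timestamp, difficulty) tuples."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["height", "timestamp", "difficulty"])
        writer.writerows(blocks)

dump_blocks("run.csv", [(1, 0, 1.0), (2, 590, 1.01), (3, 1200, 0.99)])
```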
In order to evaluate an algorithm, only 2 scenarios are really needed: the ideal case of steady-state hashrate with no random variation, and worst-case step functions. Weird ideas need to be checked against a forever-ramping function. Miner behavior can be accurately modeled with a scenario that triggers a step function of size S when D drops x% and ends the step function when D rises y%. The metric of the algo's inadequacy is sum( HR/D if HR > D else D/HR ), where the D and HR baselines are scaled to 1. How do I do a steady state (no random variation) and a step function in mining.py?
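A small sketch of that inadequacy metric (the names are mine; both series are assumed to be normalized to a baseline of 1):

```python
def inadequacy(hashrates, difficulties):
    """Sum of how far difficulty lags hashrate, counted symmetrically:
    HR/D when hashrate exceeds difficulty, D/HR otherwise."""
    total = 0.0
    for hr, d in zip(hashrates, difficulties):
        total += hr / d if hr > d else d / hr
    return total

# Example: a 3x hashrate step that difficulty only partially tracks.
print(inadequacy([1, 3, 3, 3, 1], [1, 1, 1.5, 2.2, 2.2]))
```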
A couple of charts below comparing how different block times affect target (1/difficulty) under different algos: wtema, the various emas, and simpexp. simpexp is a new one I just added with a very simple rule:
simpexp is motivated by @zawy12's argument that the algo should just accept negative block times, but ensure that their effect on difficulty is offset by the subsequent large positive block time. simpexp has this property - from the comments:
These charts convince me of a few things:
|
My post above shows WHM N=60 has better metrics for protecting against miner behavior than even a "fast" EMA with N=25, while keeping smoother difficulty. What would make better evidence? |
I understand that you've put months and months into comparing these. But simplicity counts for a lot too. wtema is literally 2 lines. What are the specific attacks that WHM (with its …) handles better? Maybe if you can add WHM and the attacks you have in mind here, the importance of choosing it over something like wtema will be clearer. Just my 2c! I don't make the decisions around here... |
We all make decisions that influence.
The post I linked to describes the attacks. I am unable to code the Python well enough to model them. I can't even figure out how to change algos from the default to test a WHM. I can't do a pull request if I can't test it, and I don't even know how to do a pull request. But I've given the code above if someone wants to do it. After that, we still have a metrics issue in defining "best". You would have to read my articles to see the justification. The same thing goes for selecting the +/- timestamp limits and N. I selected N based on not wanting to see the difficulty drop below 80% of the correct value too often. That's based on the live data shown below. The question of whether to "do better" and make the limit 90% with a higher N is a very complicated subject that I covered in an article, and it's based on looking at other charts that are not as obvious as the ones below. The timestamp limit justification is easier: I don't want coins to see a 10% to 50% drop in difficulty from a single bad timestamp, and +8xT was demonstrated above to allow recovery from even 50x miners leaving. |
Yes, I've tested all kinds of variations to come to the conclusion that WHM N=60 can't be beat by another algorithm with different settings. I can see the charts for 5 different algorithms using 5 different settings in 30 seconds. Yes, simple code is nice, but if the more complicated code can be done safely, small coins really need it. If you want simple code for safety reasons, then you would choose Digishield v3 without the MTP delay over the EMAs, because it's nearly as good and is used in live coins a lot. Digishield v3 is … Here's an example of why low N is good (and why comparing like you say is important). |
That last example shows that WHM with N=37 responds more quickly to short block times than EMA with N=100. But we could just reduce EMA's N to make it equally responsive, right? If we do that - find the N for EMA that makes the two equally responsive in the above scenario - then what is the advantage of WHM? Perhaps there's some other attack that, of those two equally-responsive-to-the-above algos, WHM handles better? |
Sorry, I accidentally had N=37 in there; the WHM comparison was an afterthought. My main point was that the standard Digishield v3 is not so far behind EMA, and is even better if the wrong N is in place.

I found something almost twice as good as the WHM. I first remembered to test my dynamic EMA, and it beats WHM. Then I combined a low-N WHM with Digishield's method of tempering a low-N SMA. It tied the dynamic EMA. I could not get a tempered EMA to do better than EMA. Then I made the tempered WHM switch to a low-N EMA if a high/low statistical event occurs. This is an especially exciting idea because it efficiently combines the best of each idea in a way that's ideally suited for our needs. We want fast initial response without overshooting. This is important because we know miners will leave in droves if it overshoots. Tempered WHM does this, and it works great. But we can also see when a big miner suddenly comes on or off by sudden solvetime changes over 5 blocks. That's essentially a yes/no event in our situation, so a yes/no coding is justified, if it's symmetrical and does not overshoot. EMA is perfectly suited for this because the statistical trigger is based on past data. I know from experience with dynamic SMAs that just switching to a lower N for my averaging will not have a net benefit, for complex reasons. But the jaw-dropping realization here is that the EMA doesn't look at what triggered the request for its activation. EMA says "Huh? You think you have an unusual event? Well, I'm going to ignore your data and check for myself." It can begin immediately with the next block. I was able to think of an exploit for this: a big miner jumping on will trigger the event and, knowing our code, could throw a long timestamp at the low-N EMA, knocking difficulty down. That's not cool. So we use deadalnix's(?) idea of the median of the last 3 timestamps, but I'll just subtract it from a median of 3 shifted back only 1 block. So this requires every idea we've seen, carefully applied. I can't combine them effectively any other way.

I've done a lot on simulating real coin/miner activity and getting a metric for it. I simulate a very typical 3x attack; I've seen it in 4 coins. (If it's 50x, the comparative results here will not change much, except that this dynamic algo will blow the other algorithms away because the event is easily detected and it has a low N. So the following is the worst case for this algorithm.) The attackers typically stay on until the hashrate has risen 35%. So the attack ends when that occurs, as the red bars below will show. Again, this attack example is just for the metric, and the relative results will not change much if the attack profile changes. Also, this attack simulates price changes as well as accidental drops in D and accidental drops in hashrate that cause a low difficulty. In live coins, my "avg of 11 ST > 2xT" delay-percentage metric is about 1/3 of the hash-attack metric, but the simulation here is brutal in making the attacks constant, so I have to scale the delays up and multiply by 5 instead of 3 (and delays are still weighted less than "blocks stolen", i.e. the avg of 11 ST indicating > 2x hashing is present). Delays and blocks stolen are connected: algos good at one are usually worse at the other. More attacking means more delays. So summing them, weighted to be nearly equal, into a single metric is completely justified. Here are the results. Lower is better.
Digishield is really not as good relative to WT as indicated above, because I didn't fix it to be on a comparable scale. Digishield is not shown in the graphs for space reasons. SMA is even worse than 33%. The numbers in my table above are more accurate than these examples. I put together this algorithm while driving the kids some distance to school about 4 hours ago, so it's not merely made to "curve-fit" my testing. It's a theory that seems to win all three observational graphs. |
@zawy12, let's focus on specific questions:
Please give the short versions of the answers (≤3 lines). I want to understand, but I expect some people have just stopped reading. Also, I understand reimplementing your algos here may be painful - maybe you could just generate a chart like the ones I plotted above (new_target/old_target for block times -2h, -1h, ..., 7d) from your own code? Or just send me the numbers & I'll add them to the charts. |
I'm sure few read 1/10 of what I write. I don't understand your curves. |
Well, "It kicks ass on my tests" isn't the stuff that ACKs are made of. Unless you can briefly convey a specific attack or two that illustrate why it outperforms all parameterizations of wtema, it seems unlikely to be adopted here. The curves are just each algo's mapping from block time to (new_target/old_target). Eg, the flat grey line at top right of the first chart shows that in response to any block time ≥1d, emai-1d just multiplies target by 144 (not too nice...). |
I specified the exact math of the attacks and metrics and showed the results. I guess I did not explain how I chose N for each algorithm. (I am not aware of any other parameterization for EMAs.) I chose the N that results in attacks of about the same width (the red bars). If they are about the same width, it means the algorithms will respond at about the same speed to a price increase, so they are compared fairly starting from that basis. I could raise N for any of them to "win" in my test, because there is no increase in hashrate unless the difficulty dips below 1. Then the metrics measure their ability to not accidentally go too low too often, and how well they come back down, given the same general speed of response. I varied the low D that triggers the start, the stop D, and the hashrate. These hardly affected the results at all, except the best one was able to shine if the hashrate was large. I did not go below 0.8xD for the start or below 2x baseline hashrate for the attack.

I can't prove I am not mistaken in some undetermined way without an infinite number of words. But you can falsify a statement; it's a lot less work to check a hash than it is to generate it. My falsifiable observation is that TWHM and TWHM-DEMA are the best we have when checked against the metrics I have theorized are the best. What is the competing, contradicting theory? It seems like your charts would be different for different N. By requiring the widths of the red bars to be about the same, I'm factoring out what seems to be a missing parameterization in your charts. In responding, I see the step response should be a lot better (this is a given in these controllers), and that it should be used to determine my N for each algorithm. I now see that my step responses above were not "fair". An equivalent step response between them means they will respond to price changes the same, and our whole goal is to balance speed of response to price and accidental changes without accidentally going too low and attracting hashing. So I need to re-compare them to make sure TWHM and EMA were not getting an unfair advantage from not being responsive. |
The purpose of providing code here is to let people play with your algos & scenarios and convince themselves that both are realistic and bug-free. On "the charts would be different for different N", yes, that was the point of my #1 above:
All the algos in my chart responded to a 4h block by increasing target by between 15% (wtema-100) and 19% (emai-1d) - I figured that made it close enough to an apples-to-apples comparison. |
I posted the WHM algorithm above, and almost two months ago as an issue under the title "Best algorithm so far". Masari implemented it and it is the best algorithm of the seven I follow. I think my baseline is going to be the ability to change 50% within 2 hours in response to a 10% price change (aka needed difficulty change). I mean the equation would be assigned an N chosen to meet but not exceed that goal. |
I standardized the N values by making their responses to a 3x hashrate step function (a 50-block step) the same. I changed N until they all averaged the same ending difficulty. The tempered WHM failed to be an improvement. Otherwise, the rankings based on attracting hash attacks and causing delays come out the same. I used 300,000 blocks for the rankings. You can see the WHM-EMA responds faster than the others to the step function, which causes its ranking to be underestimated. It also gets even better scores if the hashrate changes more than the 3x used in testing. The WHM-EMA uses the same N as the WHM below it (N=90) and then switches to an N=10 EMA if 10 blocks show a statistically significant event. It uses that for 20 blocks after the last event before switching back to WHM.
|
Add support for a new wtema-72 DAA characterized by a single alpha parameter and no fixed window size. Results are similar to ema-1d from @jacob-eliosoff with differences in the adversarial dr100 and ft100 scenarios.
wtema-72:
wtema-72 approximates a 144-block window by setting the weight half-life to 72 blocks. This results in a latest-block weight of ~1/104. ema-1d sets the latest-block weight to ~1/144, which is slower-responding.
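For readers wanting the update rule in one place, here is a sketch of the commonly cited wtema form (this is one reading of the rule described in this PR, with alpha_recip ≈ 104 for the 72-block half-life; it is not guaranteed to match the diff line for line):

```python
IDEAL_BLOCK_TIME = 600  # seconds

def next_target_wtema(prev_target, block_time, alpha_recip=104):
    """Single-parameter EMA of targets: the newest block gets a weight of
    ~1/alpha_recip and there is no fixed window. Integer math only."""
    return (prev_target * (IDEAL_BLOCK_TIME * (alpha_recip - 1) + block_time)
            // (IDEAL_BLOCK_TIME * alpha_recip))
```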