feat(evmstaking): implement epoched staking #96
Conversation
ProcessDeposit(ctx context.Context, ev *bindings.IPTokenStakingDeposit) error
ProcessWithdraw(ctx context.Context, ev *bindings.IPTokenStakingWithdraw) error
These two are not used in the evmengine module.
if isNextEpoch {
	// process all queued messages
	if err := k.ProcessAllMsgs(ctx); err != nil {
What if we have a lot of msgs? Any risk of a block timeout here?
Epoch blocks usually take longer due to backlog calculations; e.g., Osmosis's daily epoch block can take up to a few minutes at worst. We should configure settings to allow epoch blocks to exceed any timeout.
Right. We need to allow a longer block time for the epoch block.
makes sense👍🏻
How are we thinking about setting a longer timeout for epoch blocks? I think the current timeout in config is set for all blocks.
I'm not sure we can set a longer timeout for the epoch block, but if possible, it could be good. Why don't we make this improvement in another PR?
We should definitely increase the timeout. I would imagine a lot of operations need to be processed during the epoch block. Can be in another PR
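The timeout concern above comes from draining the entire backlog in a single epoch block. A minimal, self-contained sketch of that pattern (the `Keeper`, `Msg`, and `processAllMsgs` names here are stand-ins, not the actual PR code): work at the epoch boundary is O(queue length), which is why that one block can run long.

```go
package main

import "fmt"

// Msg is a stand-in for a queued staking message (hypothetical type).
type Msg struct{ ID string }

// Keeper sketches the relevant evmstaking keeper state: the last epoch
// number it processed and a FIFO of queued messages.
type Keeper struct {
	lastEpoch int64
	queue     []Msg
}

// processAllMsgs drains the whole queue; with a large backlog this is
// the work that can make an epoch block slow.
func (k *Keeper) processAllMsgs() int {
	n := len(k.queue)
	k.queue = k.queue[:0]
	return n
}

// EndBlock checks every block whether a new epoch has started and only
// then drains the queue.
func (k *Keeper) EndBlock(currentEpoch int64) int {
	if currentEpoch <= k.lastEpoch {
		return 0 // not an epoch boundary: nothing to process
	}
	k.lastEpoch = currentEpoch
	return k.processAllMsgs()
}

func main() {
	k := &Keeper{queue: []Msg{{"a"}, {"b"}, {"c"}}}
	fmt.Println(k.EndBlock(0)) // same epoch: nothing processed
	fmt.Println(k.EndBlock(1)) // epoch boundary: all queued msgs processed
}
```

Non-epoch blocks pay nothing; the epoch block pays for the whole backlog at once, which is the cost a longer per-block timeout would have to cover.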
I added validation of the epoch duration for epoched staking in a6c4fd5. This is because if the unbonding period is shorter than the epoch duration, the unbonding is mature, but the staking keeper's
Could you elaborate on this more? Why is completeUnbonding not processed if the unbonding period is shorter than the epoch duration? I thought withdrawals (full or partial) should happen regardless.
FWIW, the current design implies that withdrawing a stake doesn't immediately lower the validator power, and only starts to unbond (for 21 days) at the start of each epoch.
if err := k.processMsg(ctx, &qMsg); err != nil {
	log.Warn(ctx, "Failed to process queued message", err, "tx_id", string(qMsg.TxId))
	return errors.Wrap(err, "process queued message")
I think we need to continue here, otherwise all queued messages after the error will not get processed. The same issue happened before while the evmstaking module processed events (incl. invalid ones) from the staking contract.
Also, we need to consider what it means for a queued message to fail. If a reward minting fails, why did it fail, and are there any side effects we need to resolve (e.g. bank module balance increasing)? In other words, gracefully handling failed messages in an epoch.
I agree that we should continue processing messages. I just returned an error here because it is used by our test code. I will make it continue processing messages.
I also totally agree that we need to handle failed messages. This is the part I had the most trouble with. Since the cases where a message fails can be quite complex to consider right now, I'm thinking of storing failed messages separately rather than handling them right away, so that we can handle them appropriately later, for example via an upgrade. What do you think?
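The continue-and-store approach discussed above can be sketched as follows (the `QueuedMsg`, `processMsg`, and `processAll` names are illustrative stand-ins, not the PR's actual types): a failure is logged and parked in a separate list instead of aborting the loop, so later messages still run.

```go
package main

import (
	"errors"
	"fmt"
)

// QueuedMsg is a stand-in for types.QueuedMessage (hypothetical).
type QueuedMsg struct {
	TxID string
	Bad  bool // simulates a message whose processing fails
}

// processMsg fails for "bad" messages to exercise the error path.
func processMsg(m QueuedMsg) error {
	if m.Bad {
		return errors.New("process failed")
	}
	return nil
}

// processAll continues past failures instead of returning early, and
// stores failed messages separately for later handling (e.g. via upgrade).
func processAll(msgs []QueuedMsg) (processed int, failed []QueuedMsg) {
	for _, m := range msgs {
		if err := processMsg(m); err != nil {
			// log and keep going so later messages still get processed
			fmt.Printf("failed tx %s: %v\n", m.TxID, err)
			failed = append(failed, m)
			continue
		}
		processed++
	}
	return processed, failed
}

func main() {
	msgs := []QueuedMsg{{TxID: "a"}, {TxID: "b", Bad: true}, {TxID: "c"}}
	p, f := processAll(msgs)
	fmt.Println(p, len(f)) // 2 processed, 1 stored as failed
}
```

The open question from the thread remains outside this sketch: whether a failed message has side effects (e.g. a bank balance change) that must also be rolled back or reconciled.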
	return errors.Wrap(err, "process CreateValidator msg")
}
case *stype.MsgDelegate:
	if err := k.ProcessDepositMsg(ctx, unwrappedMsg); err != nil {
In the future, maybe we can add an improvement where deposits are handled per block but only go into effect at the end of each epoch.
Because completeUnbonding is called in staking keeper's
@@ -144,7 +149,7 @@ func (k Keeper) ProcessStakingEvents(ctx context.Context, height uint64, logs []
 			continue
 		}
 		ev.StakeAmount.Div(ev.StakeAmount, gwei)
-		if err = k.ProcessCreateValidator(ctx, ev); err != nil {
+		if err = k.HandleCreateValidatorEvent(ctx, ev); err != nil {
Can we combine all these events into a single enqueue, since all of them share the same msg type?
Yes, all types of events are converted to types.QueuedMessage and enqueued to a single queue. Did you mean to combine all Handle*Event into a single func?
Yeah
I don't think we can combine all Handle*Event functions into a single function, because each event has different fields. If we want a single function, we need to move the logic for handling events (converting each event to QueuedMessage and enqueuing it) out to ProcessStakingEvents. If I misunderstand your comment, please correct me.
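The design being discussed can be sketched as follows (all types and function names here are illustrative stand-ins for the actual bindings and keeper code): one Handle*Event per event type, since each event carries different fields, but all of them funnel into the single shared queue.

```go
package main

import "fmt"

// Event payloads with different fields (stand-ins for the contract bindings).
type CreateValidatorEv struct{ Pubkey string }
type DepositEv struct{ Amount uint64 }

// QueuedMessage is the single type everything is converted to before
// entering the one shared queue (stand-in for types.QueuedMessage).
type QueuedMessage struct{ Kind, Body string }

type Keeper struct{ queue []QueuedMessage }

func (k *Keeper) enqueue(m QueuedMessage) { k.queue = append(k.queue, m) }

// One Handle*Event per event type: each knows its own fields, but all
// funnel into the same queue via enqueue.
func (k *Keeper) HandleCreateValidatorEvent(ev CreateValidatorEv) {
	k.enqueue(QueuedMessage{Kind: "create_validator", Body: ev.Pubkey})
}

func (k *Keeper) HandleDepositEvent(ev DepositEv) {
	k.enqueue(QueuedMessage{Kind: "deposit", Body: fmt.Sprint(ev.Amount)})
}

func main() {
	k := &Keeper{}
	k.HandleCreateValidatorEvent(CreateValidatorEv{Pubkey: "valpub"})
	k.HandleDepositEvent(DepositEv{Amount: 100})
	fmt.Println(len(k.queue)) // both events land in the single queue
}
```

Collapsing the handlers further would just push the per-event field mapping back into ProcessStakingEvents, which is the trade-off described in the comment above.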
@@ -268,7 +270,7 @@ func (k Keeper) ProcessWithdraw(ctx context.Context, ev *bindings.IPTokenStaking
 
 	amountCoin, _ := IPTokenToBondCoin(ev.Amount)
 
-	log.Debug(ctx, "Processing EVM staking withdraw",
+	log.Info(ctx, "EVM staking withdraw detected",
I think we should keep this at debug. I'm thinking all tx-level information is more for debugging purposes.
For other events, the log level is Info when the event is detected, thus I changed this to Info as well for code consistency. I think Debug is enough when an event is detected. How about changing all logs for event detection to Debug level?
@edisonz0718, could you check this comment? As you said, it seems better to use Debug for all tx logs, as done in 36a50a2.
client/x/evmstaking/keeper/abci.go
		delEvmAddr,
		entry.amount.Uint64(),
	))
partialWithdrawals, err := k.ExpectedPartialWithdrawals(ctx)
I think the partial withdrawal should still happen per block.
You're right.
-	"min_partial_withdrawal_amount": 100000000
-}
+	"min_partial_withdrawal_amount": 100000000,
+	"epoch_identifier": "minute"
How do we add the epoch duration in genesis?
Only the epoch identifier is needed in the evmstaking module. Epoch info such as the epoch duration lives in the epochs module, and it can be added in genesis.json for now. There is no way to add epoch info the way cosmos-sdk does.
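To make the split concrete, a genesis.json might look roughly like the fragment below. This is a sketch only: the evmstaking params mirror the snippet in this PR, but the epochs section is modeled on typical x/epochs genesis layouts, and its exact field names are assumptions, not taken from this repository.

```json
{
  "app_state": {
    "evmstaking": {
      "params": {
        "min_partial_withdrawal_amount": "100000000",
        "epoch_identifier": "minute"
      }
    },
    "epochs": {
      "epochs": [
        {
          "identifier": "minute",
          "duration": "60s",
          "current_epoch": "0"
        }
      ]
    }
  }
}
```

The evmstaking module only stores the identifier ("minute"); the duration behind that identifier is looked up in the epochs module, so shortening epochs for tests means editing the epochs section, not the evmstaking params.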
It would be great to add configuration in genesis.json to change the number of blocks per epoch. It helps to perform tests in a reasonable amount of time.
The epoch duration can be set in the epochs module. For local or mininet tests, we can use a shorter epoch by setting a shorter duration. The evmstaking module depends on the epochs module.
If we can set the duration in genesis.json, it will make setting up different networks easier. Otherwise, different networks need to manage different binaries.
I think it's ok if each network has a different genesis.json file (to be exact, a different epoch identifier for epoched staking).
If we add the duration to the evmstaking params (and add it to genesis.json), we don't need the epochs module in the evmstaking module, and we don't need the epochs module in Story for now; the evmstaking module can be independent of the epochs module. The reason I added and used the epochs module for epoched staking is that I want the epochs module to manage epoch concerns in our app for modularity. Do you think that is better than the current design, where the evmstaking module has independent epoch info for epoched staking?
Yeah, I think we should add the epoch info we use for evmstaking to the epochs module's genesis file. I guess what you meant is that it should be in another PR?
Ah, I got it. I may have misunderstood your comment.
Sure! I will add it in a following PR.
Done in #181
	"min_partial_withdrawal_amount": 100000000,
	"epoch_identifier": "minute"
},
"epoch_number": "0"
Is this the starting number of epoch?
It is the epoch number currently used in the evmstaking module for epoched staking. This value is used to check whether the next epoch has started in the epochs module, by comparing epoch numbers.
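That check reduces to a comparison between the stored number and the one the epochs module reports. A minimal sketch (the `isNextEpoch` helper is a hypothetical name, not the PR's function):

```go
package main

import "fmt"

// isNextEpoch compares the epoch number stored by evmstaking (starting
// from the genesis "epoch_number", e.g. 0) with the current epoch number
// reported by the epochs module. A strictly larger current number means
// the next epoch has started and queued messages should be processed.
func isNextEpoch(stored, current int64) bool {
	return current > stored
}

func main() {
	stored := int64(0) // from genesis: "epoch_number": "0"
	fmt.Println(isNextEpoch(stored, 0)) // still in the same epoch
	fmt.Println(isNextEpoch(stored, 1)) // next epoch has started
}
```

After processing, the stored number would be advanced to the current one so the check fires at most once per epoch.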
-	if err != nil {
-		return nil, errors.Wrap(err, "map delegator pubkey to evm address")
+	// init message queue
+	if err := k.MessageQueue.Initialize(ctx); err != nil {
Will all processed msgs be pruned after re-initializing the MessageQueue?
Yes right.
Do we initialize every epoch? I thought there is a max amount that one epoch can process and the rest will flow to the next epoch
In the current PR, there is no limit on the size of the queue; that is, all queued msgs are processed. With a limit, we should not initialize the queue, as you said.
unbondedEntries = append(unbondedEntries, UnbondedEntry{
	validatorAddress: dvPair.ValidatorAddress,
	delegatorAddress: dvPair.DelegatorAddress,
	amount:           amt.Amount,
})
I thought we have a limit on the number of operations per unbonding period or epoch. Do we know where that is enforced?
As commented here, there is no limit in this PR. Will do in a following PR.
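The follow-up being deferred here, bounding how much one epoch can process with the remainder flowing to the next epoch, could look roughly like this sketch (the `processUpTo` name and shape are assumptions, not code from this PR):

```go
package main

import "fmt"

// processUpTo drains at most limit messages per epoch; the rest stay
// queued and flow into the next epoch, bounding work per epoch block.
func processUpTo(queue []string, limit int) (processed, remaining []string) {
	if limit > len(queue) {
		limit = len(queue)
	}
	return queue[:limit], queue[limit:]
}

func main() {
	queue := []string{"m1", "m2", "m3", "m4", "m5"}
	done, rest := processUpTo(queue, 3)
	fmt.Println(len(done), len(rest)) // 3 processed this epoch, 2 carried over
}
```

With such a cap, the queue must not be re-initialized at the epoch boundary, since the carried-over messages are exactly what the next epoch processes first.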
Based on the comments, there are still two changes to be made:
You can create issues and add them in a separate PR if it's a bigger change.
Thank you for your clarification. I will make another PR for the above things.
Binary uploaded successfully 🎉📦 Version Name: 0.10.1-unstable-91a4806
This reverts commit 91a4806.
Implementation of epoched staking in the evmstaking module.

With epoched staking, the messages are handled in 2 steps: queueing & processing. The messages from EL related to the validator set are not executed immediately, but once every epoch.

Queueing messages

When events related to the validator set (CreateValidator, Deposit, Withdraw, Redelegate, and Unjail) are emitted from EL, the messages are queued in k.MessageQueue.

Processing messages

In EndBlock of the evmstaking module, every block checks whether the next epoch has started. If it has, all queued messages are executed and the updated validator set is returned.
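The two-step flow can be sketched as a minimal FIFO queue (this `MessageQueue` is an illustrative stand-in, not the actual k.MessageQueue implementation): events are enqueued as they arrive from EL, then drained once per epoch, with the queue re-initialized afterwards so processed msgs are pruned.

```go
package main

import "fmt"

// MessageQueue is a minimal FIFO sketch of the two-step flow.
type MessageQueue struct{ items []string }

// Initialize resets the queue (processed msgs are pruned on re-init).
func (q *MessageQueue) Initialize() { q.items = nil }

// Enqueue records a message as its event arrives from EL.
func (q *MessageQueue) Enqueue(m string) { q.items = append(q.items, m) }

// ProcessAll drains every queued message at the epoch boundary,
// returning how many ran, and resets the queue on success.
func (q *MessageQueue) ProcessAll(run func(string) error) (int, error) {
	n := 0
	for _, m := range q.items {
		if err := run(m); err != nil {
			return n, err
		}
		n++
	}
	q.Initialize()
	return n, nil
}

func main() {
	var q MessageQueue
	q.Initialize()
	q.Enqueue("deposit")
	q.Enqueue("withdraw")
	n, _ := q.ProcessAll(func(m string) error { return nil })
	fmt.Println(n, len(q.items)) // 2 processed, queue empty afterwards
}
```

Between epoch boundaries the queue only grows; the validator set only changes when ProcessAll runs, which is what makes the staking "epoched".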
Epoch info

The epoch is managed in the epochs keeper, so it needs to be injected into the evmstaking keeper. The epoch_identifier for epoched staking is added to the params of evmstaking.

issue: none