EFM Recovery Service Event and Transaction #440
base: feature/efm-recovery
Conversation
Copying over the main comment from the previous review: #420 (comment)
The second conditional case of the recover_epoch transaction (when unsafeAllowOverwrite is false) doesn't use the recoveryEpochCounter value at all. But if we go down that code path and FlowEpoch.currentEpochCounter != recoveryEpochCounter, we know the recovery process will fail.
So I think we should use recoveryEpochCounter in the second codepath as well. We can explicitly check that FlowEpoch.currentEpochCounter == recoveryEpochCounter, for example as a precondition, and panic if this doesn't hold.
contracts/epochs/FlowEpoch.cdc (Outdated)

```cadence
/// Create new EpochMetadata for the recovery epoch with the new values
let newEpochMetadata = EpochMetadata(
    /// Increment the epoch counter when recovering with a new epoch
    counter: FlowEpoch.proposedEpochCounter(),
```
Copying comment from previous PR: #420 (comment).
Line 624 is starting a multi-line expression, constructing an instance of EpochMetadata.
My suggestion is to indent the lines within that expression:
```cadence
let newEpochMetadata = EpochMetadata(
    /// Increment the epoch counter when recovering with a new epoch
    counter: FlowEpoch.proposedEpochCounter(),
    seed: randomSource,
    startView: startView,
    endView: endView,
    stakingEndView: stakingEndView,
    // The following fields will be overwritten in `calculateAndSetRewards` below
    totalRewards: 0.0,
    collectorClusters: [],
    clusterQCs: [],
    dkgKeys: dkgPubKeys)
```
Similar to what we do on, for example, line 646-658
```go
// Specifically, we execute an epoch recover transaction and confirm both scenarios are true:
// - epoch recover that specifies unsafeAllowOverwrite = false increments the epoch counter, effectively starting a new epoch.
// - epoch recover that specifies unsafeAllowOverwrite = true overwrites the current epoch and does not increment the counter.
func TestEpochRecover(t *testing.T) {
```
Copying comment from previous PR: #420 (comment).
Once we are passing in the recoveryEpochCounter, we must use that value as the epoch counter for the recovery epoch 100% of the time, otherwise we will produce an incompatible recovery epoch state. recoveryEpochCounter is the source of truth.
In the commit which was merged to the feature branch, the logic is:
- if unsafeAllowOverwrite is set, then use recoveryEpochCounter as the counter for the recovery epoch
- otherwise, ignore the recoveryEpochCounter input and assume that the smart contract and protocol state epoch counters are synchronized
I think conditional branch 2 is unsafe and we should respect the recoveryEpochCounter codepath in all cases. Sorry if I didn't explain this properly in the previous review. The purpose of including recoveryEpochCounter at all is to make sure it is used as the epoch counter for the recovery epoch being constructed in the transaction, in all circumstances.
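To make the proposed invariant concrete, here is a minimal Go sketch of the rule being suggested. This is illustrative only — the real logic lives in the Cadence contract, and the function name and signature here are hypothetical — but it shows recoveryEpochCounter being validated explicitly on both codepaths instead of being ignored when unsafeAllowOverwrite is false:

```go
package main

import "fmt"

// validateRecoveryCounter sketches the suggested rule: the caller-supplied
// recoveryEpochCounter is the source of truth and is checked on BOTH
// codepaths, instead of being ignored when unsafeAllowOverwrite is false.
// (Illustrative Go only; names and semantics are assumptions.)
func validateRecoveryCounter(currentCounter, recoveryCounter uint64, unsafeAllowOverwrite bool) (uint64, error) {
	if unsafeAllowOverwrite {
		// Overwriting the current epoch: counters must already match.
		if recoveryCounter != currentCounter {
			return 0, fmt.Errorf("overwrite requires recoveryEpochCounter == currentEpochCounter (got %d, want %d)", recoveryCounter, currentCounter)
		}
		return recoveryCounter, nil
	}
	// Starting a new epoch: check explicitly (e.g. as a precondition) rather
	// than assuming the contract and protocol state counters are in sync.
	if recoveryCounter != currentCounter+1 {
		return 0, fmt.Errorf("new-epoch recovery requires recoveryEpochCounter == currentEpochCounter+1 (got %d, want %d)", recoveryCounter, currentCounter+1)
	}
	return recoveryCounter, nil
}

func main() {
	if c, err := validateRecoveryCounter(7, 8, false); err != nil || c != 8 {
		panic("expected new-epoch recovery at counter 8 to succeed")
	}
	if _, err := validateRecoveryCounter(7, 9, false); err == nil {
		panic("expected counter mismatch to be rejected")
	}
	fmt.Println("ok")
}
```

In a Cadence contract the equivalent would be a `pre` block or an explicit `panic` at the top of the second branch.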
This is coming along nicely!
My main suggestion in this review is to expand the test coverage to cover more edge cases (suggestions enumerated here). The existing tests are quite verbose, so I think it would be worthwhile to invest time in factoring out some of the common test logic when adding test cases. After we get Josh's input on the implementation changes, I'd be OK with implementing additional test coverage in a separate PR. If you'd like to do that, let me know.
contracts/epochs/FlowEpoch.cdc (Outdated)

```cadence
/// Create new EpochMetadata for the recovery epoch with the new values
let newEpochMetadata = EpochMetadata(
    /// Increment the epoch counter when recovering with a new epoch
    counter: FlowEpoch.proposedEpochCounter(),
```
```diff
- counter: FlowEpoch.proposedEpochCounter(),
+ counter: recoveryEpochCounter,
```
contracts/epochs/FlowEpoch.cdc (Outdated)

```cadence
self.stopEpochComponents()
let randomSource = FlowEpoch.generateRandomSource()

let numViewsInStakingAuction = FlowEpoch.configurableMetadata.numViewsInStakingAuction
```
We're accepting stakingEndView as an input (implicitly specifying a desired staking auction length), so we should not also use this potentially conflicting representation of the same information. Otherwise we could persist a different staking end view in the smart contract storage than what we emit in the service event.
lib/go/test/flow_epoch_test.go (Outdated)

```go
newEndView := endView + 1
args[3] = CadenceUInt64(newEndView)

// avoid initializing a new epoch, set unsafeAllowOverwrite to true
```
It would be great to explicitly split this out into a separate test case.
```go
@@ -1516,3 +1517,182 @@ func TestEpochReset(t *testing.T) {
		assertEqual(t, CadenceUFix64("249999.99300000"), result)
	})
}
```
I think we should test a broader variety of cases:
- Test that we can execute Epoch Recovery while the smart contract is in each epoch phase (staking phase, setup phase, committed phase).
  - In particular, I'm thinking of the setup and committed phases, where a proposed EpochMetadata entry for the next epoch is already stored in that state.
- Test different inputs of recoveryEpochCounter and FlowEpoch.currentEpochCounter:
  - Validate that overwrite attempts panic if unsafeAllowOverwrite is false.
  - Validate that recoveryEpochCounter outside [currentCounter, proposedCounter] panics, regardless of unsafeAllowOverwrite.
- Test that staking rewards are paid out exactly once per epoch.
  - If we are overwriting the current epoch, then staking rewards should not be paid out.
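As a rough illustration of the counter-input dimension of this matrix, the Go sketch below enumerates recoveryEpochCounter values around the current counter for both unsafeAllowOverwrite settings. The validator here is a stand-in for the contract's checks (all names hypothetical); the real tests would submit recovery transactions against the emulator and would additionally vary the epoch phase and check reward payouts:

```go
package main

import "fmt"

// validRecovery is a stand-in for the contract's counter checks, used only to
// illustrate the suggested test matrix. (All names here are hypothetical.)
func validRecovery(currentCounter, recoveryCounter uint64, unsafeAllowOverwrite bool) bool {
	if unsafeAllowOverwrite {
		return recoveryCounter == currentCounter // overwrite the current epoch
	}
	return recoveryCounter == currentCounter+1 // transition into a new epoch
}

// enumerateCases walks recoveryEpochCounter values around the current counter
// for both overwrite settings, counting how many combinations are accepted.
func enumerateCases(currentCounter uint64) (valid, invalid int) {
	for _, overwrite := range []bool{false, true} {
		for _, recovery := range []uint64{currentCounter - 1, currentCounter, currentCounter + 1, currentCounter + 2} {
			if validRecovery(currentCounter, recovery, overwrite) {
				valid++
			} else {
				invalid++
			}
		}
	}
	return valid, invalid
}

func main() {
	v, inv := enumerateCases(10)
	fmt.Println(v, inv) // prints "2 6": exactly one accepted input per overwrite setting
}
```

In the real suite, each (phase, counter, overwrite) triple would be a `t.Run` subtest.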
contracts/epochs/FlowEpoch.cdc (Outdated)

```cadence
    message: "recovery epoch counter should equal current epoch counter"
)
self.stopEpochComponents()
let numViewsInStakingAuction = FlowEpoch.configurableMetadata.numViewsInStakingAuction
```
Same comment here about numViewsInStakingAuction potentially conflicting with stakingEndView - startView.
```cadence
@@ -432,6 +497,242 @@ access(all) contract FlowEpoch {
    FlowEpoch.account.storage.load<Bool>(from: /storage/flowAutomaticRewardsEnabled)
    FlowEpoch.account.storage.save(enabled, to: /storage/flowAutomaticRewardsEnabled)
}

access(self) fun emitEpochRecoverEvent(epochCounter: UInt64,
```
Could you add short documentation for this function? In particular, I would note that the inputs are not validated (the caller must validate them with recoverEpochPreChecks).
Related to Jordan's comment: in the spirit of defensive programming, I would suggest calling recoverEpochPreChecks in the body of emitEpochRecoverEvent. Thereby, we reduce the risk of accidentally forgetting the pre-checks in a future refactoring.
contracts/epochs/FlowEpoch.cdc (Outdated)

```cadence
numViewsInStakingAuction: UInt64,
numViewsInDKGPhase: UInt64,
```
I would suggest omitting both these parameters:
- numViewsInDKGPhase, because it isn't used in the caller and can just be read from the global config here.
- numViewsInStakingAuction, because it could potentially conflict with stakingEndView - startView, causing us to compute incorrect values for the DKG phase views below. Instead, we should just use stakingEndView in the computation to avoid the potential conflict.
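To illustrate the suggested derivation, here is a small Go sketch (illustrative only; the contract code is Cadence) that anchors the DKG phase boundaries on stakingEndView directly, so the staking auction length has a single authoritative representation:

```go
package main

import "fmt"

// dkgPhaseFinalViews mirrors the reviewer's suggested derivation: compute the
// DKG phase boundaries from stakingEndView directly, instead of from a
// separately supplied numViewsInStakingAuction that could disagree with
// stakingEndView - startView. (Illustrative Go only.)
func dkgPhaseFinalViews(stakingEndView, numViewsInDKGPhase uint64) [3]uint64 {
	dkgPhase1FinalView := stakingEndView + numViewsInDKGPhase
	dkgPhase2FinalView := dkgPhase1FinalView + numViewsInDKGPhase
	dkgPhase3FinalView := dkgPhase2FinalView + numViewsInDKGPhase
	return [3]uint64{dkgPhase1FinalView, dkgPhase2FinalView, dkgPhase3FinalView}
}

func main() {
	fmt.Println(dkgPhaseFinalViews(200, 30)) // prints "[230 260 290]"
}
```

With this shape there is no way for the persisted staking end view and the emitted one to diverge.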
Thank you for the nice work. I appreciate the multitude of smaller refactorings, where you have moved auxiliary code into little service methods -- that certainly improves readability of the code.
I have added various suggestions for extending the documentation. However, given my very limited knowledge of Cadence and the epoch smart contracts, I don't feel sufficiently confident in my ability to spot potential problems/errors to approve this PR.
One significant challenge [update: it's not a big risk; see Jordan's comment below] that I noticed:
- The FlowEpoch smart contract offers two entry points for recovery:
  - recoverNewEpoch, which requires that the counter for the recovery epoch is one bigger than the smart contract's current epoch
  - recoverCurrentEpoch, which enforces that the counter for the recovery epoch matches the smart contract's current epoch
- I think this places strong limitations on the scenarios we can successfully recover from (specifically, the time frame in which a recovery must be successful). Let's unravel this a bit:
  - Initially we assume that the Protocol State and the Epoch Smart Contract are on the happy path: both counters are (largely) in sync.
  - Then there is a problem and the Protocol State goes into EFM. That means that for the running network, where the protocol state is the source of truth that determines operation, the network remains on epoch counter N.
  - However, while the protocol state stays on epoch N (extending it until successful recovery), the smart contract can continue to progress through its speculative epochs.
  - I think it is very likely that failures will occur relatively close to the desired epoch switchover, because the Epoch Setup phase only ends a few hours before the target transition, and that is where problems typically occur. Let's say it's 3 hours before the target transition and the protocol state goes into EFM and stays at epoch N.
  - The smart contract continues its work and enters epoch N+1. Everyone is stressed because the network is in EFM, some people might be OOO, the engineers are doing the best they can. The engineers trying to recover the epoch lifecycle know that they have to specify the next epoch: they query the smart contract, which tells them the system is currently in epoch N+1. So the engineers specify epoch N+2 and call recoverNewEpoch. The smart contract is happy, emits a recovery event for epoch N+2, and enters epoch N+2 ... but the protocol state rejects the recovery because it is still in epoch N and expects epoch N+1 to be specified. And then we are screwed: the protocol state must receive a recovery for epoch N+1, but the smart contract is already at N+2; it only accepts recovery data for epochs with counter ≥ N+2! ☠️
  - Different scenario: due to typos, stress, and unfamiliarity with the recovery process, the first two calls to recoverNewEpoch emit an event (each increasing the counter), both of which are rejected. We end up in a similar scenario: the smart contract's epoch counter has already progressed beyond the value expected by the dynamic protocol state.
  - Other scenario: too many partner nodes are offline and we would like to get them back online before attempting an epoch recovery ... reaching out and helping the partners might take some time. The network is running fine (just staying in its current EFM epoch). We decide to leave the system in EFM for more than a week (presumably nothing bad will happen), but forget to call epochReset ... so after a week the smart contract is now in epoch N+2 while the Protocol State is still in N.
Essentially our current smart contract implementation makes the very limiting assumption that the Protocol State's Epoch counter can be at most one behind the smart contract. Otherwise, we have no means for recovery.
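The drift concern above can be condensed into a few lines of Go (a toy model, not contract code; names are invented for illustration): once the smart contract's counter is more than one ahead of the protocol state's EFM epoch N, no single input satisfies both sides.

```go
package main

import "fmt"

// Toy model of the divergence described above. The protocol state is stuck in
// its EFM epoch while the smart contract keeps advancing speculative epochs;
// recoverNewEpoch (as described in this review) only accepts the contract's
// counter + 1, while the protocol state only accepts its own counter + 1.
func canRecover(protocolCounter, contractCounter uint64) bool {
	acceptedByProtocol := protocolCounter + 1
	demandedByContract := contractCounter + 1
	return acceptedByProtocol == demandedByContract
}

func main() {
	n := uint64(5)
	fmt.Println(canRecover(n, n))   // counters in sync: recovery to N+1 works
	fmt.Println(canRecover(n, n+2)) // contract drifted ahead: no valid input exists
}
```

Allowing the contract to accept any counter ≥ its own (the "N+k" fix suggested below) would make canRecover true again for the drifted cases.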
Let's keep in mind that we are implementing a disaster prevention mechanism here: it's very rare, so no one really has much experience with it; occurrences of disasters cannot be planned for; people are stressed and the engineers with the deep background might be unavailable; the first EFM might happen in a year, when we have already forgotten some of the critical but subtle limitations.
Hence, I am strongly of the opinion that this process should be as fault-proof as possible:
- multiple/many failed recovery attempts should be possible
- the system should provide ample time for successful recovery (certainly more than a week)
- it should be nearly impossible for failed recovery attempts to break anything (no matter how broken the inputs are)
I think we are pretty close but have two main hurdles:

1. We should prepare for the scenario where the protocol state is in EFM epoch N but the smart contract believes the system is in epoch N+k for any integer k. That would be something to solve as part of this PR (or a subsequent smart contract PR).
2. Ideally, the fallback state machine guarantees that a successful RecoverEpoch event is always a valid epoch configuration. The recovery parameters might be manually set, so the risk of human error should be mitigated. What is missing is checking:
   - that the cluster QCs are valid QCs for each collector cluster
   - that the DKG committee has sufficient intersection with the consensus committee to allow for live consensus

   This is out of scope for this PR.
As usual, we should be weighing how much engineering time this would actually take to implement. Nevertheless, it deeply worries me that we have a bunch of subtle footguns in our implementation, in that we might irreparably break mainnet if we violate one of the several subtle constraints (either by human error, or, even worse, by not acting for only a week).
Also cc @durkmurder @jordanschalm for visibility, comments and thoughts.
contracts/epochs/FlowEpoch.cdc (Outdated)

```cadence
    )
}

/// Stops epoch components. If the configuration is a valid configuration the staking auction,
```
I think this sentence is broken:
If the configuration is a valid configuration the staking auction,
```cadence
access(self) fun emitEpochRecoverEvent(epochCounter: UInt64,
                                       startView: UInt64,
                                       stakingEndView: UInt64,
```
Value not used; related to Jordan's comment.
lib/go/test/epoch_test_helpers.go (Outdated)

```go
dkgPhase3FinalView      uint64
targetDuration          uint64
targetEndTime           uint64
clusterQCVoteDataLength int
```
I am not entirely sure, but based on the implementation, that would be the number of cluster QCs (or, more generally, the number of clusters). I would suggest using a more specific field name:

```diff
- clusterQCVoteDataLength int
+ numberClusterQCs int
```
```go
	}
}

func convertClusterQcsCdc(env templates.Environment, clusters []cadence.Value) []cadence.Value {
```
Some minimal documentation would be great.
Responding to Alex's comment here 👇
You outlined a few scenarios in your comment, but each of them relies on the smart contract continuing to transition through speculative epochs without the Protocol State following suit. In practice the smart contract transition process provides a strong guarantee that [remainder of comment in collapsed sections: "Smart Contract Transition Logic", "Outstanding Problems"; content not captured]
Co-authored-by: Alexander Hentschel <alex.hentschel@flowfoundation.org> Co-authored-by: Jordan Schalm <jordan@dapperlabs.com>
Co-authored-by: Jordan Schalm <jordan@dapperlabs.com> Co-authored-by: Alexander Hentschel <alex.hentschel@flowfoundation.org>
- replace numViewsInStakingAuction with stakingEndView - startView
- don't accept numViewsInDKGPhase as a parameter; read it from configurable epoch metadata
Really nice expansion of the test coverage -- thank you.
Summary of feedback:
- If I'm understanding correctly, we don't have a test case that executes recovery during the staking phase -- I think we should add this before merging
- I added some questions about the last test case (we're doing two recoveries back-to-back and I'm not sure why)
contracts/epochs/FlowEpoch.cdc (Outdated)

```cadence
let numViewsInStakingAuction = stakingEndView - startView
let numViewsInDKGPhase = FlowEpoch.configurableMetadata.numViewsInDKGPhase
let dkgPhase1FinalView = startView + numViewsInStakingAuction + numViewsInDKGPhase - 1
let dkgPhase2FinalView = startView + numViewsInStakingAuction + (2 * numViewsInDKGPhase) - 1
let dkgPhase3FinalView = startView + numViewsInStakingAuction + (3 * numViewsInDKGPhase) - 1
```
```diff
- let numViewsInStakingAuction = stakingEndView - startView
- let numViewsInDKGPhase = FlowEpoch.configurableMetadata.numViewsInDKGPhase
- let dkgPhase1FinalView = startView + numViewsInStakingAuction + numViewsInDKGPhase - 1
- let dkgPhase2FinalView = startView + numViewsInStakingAuction + (2 * numViewsInDKGPhase) - 1
- let dkgPhase3FinalView = startView + numViewsInStakingAuction + (3 * numViewsInDKGPhase) - 1
+ let numViewsInDKGPhase = FlowEpoch.configurableMetadata.numViewsInDKGPhase
+ let dkgPhase1FinalView = stakingEndView + numViewsInDKGPhase
+ let dkgPhase2FinalView = dkgPhase1FinalView + numViewsInDKGPhase
+ let dkgPhase3FinalView = dkgPhase2FinalView + numViewsInDKGPhase
```
We can simplify the calculation here by adding to the staking end view directly.
```cadence
let dkgPhase2FinalView = dkgPhase1FinalView + self.configurableMetadata.numViewsInDKGPhase
let dkgPhase3FinalView = dkgPhase2FinalView + self.configurableMetadata.numViewsInDKGPhase
```
```go
numStakingViews  uint64 // num views for staking auction
numDKGViews      uint64 // num views for DKG phase
numClusters      uint64 // num collector clusters
numEpochAccounts int    // num collector clusters
```
```diff
- numEpochAccounts int // num collector clusters
+ numEpochAccounts int // num accounts to setup for staking
```

The comment is duplicated from the line above.
```go
)
args := getRecoveryTxArgs(env, ids, startView, stakingEndView, endView, targetDuration, targetEndTime, epochCounter)
// avoid using the recover epoch transaction template, which has sanity checks that would prevent submitting the invalid epoch counter
code := `
```
We could stick this in a .cdc file and use go:embed instead of putting the script inline. For example, this variable embeds this file in flow-go.
```go
cadence.NewArray(collectorClusters), // collectorClusters
cadence.NewArray(clusterQcVoteData), // clusterQCVoteData
```
```diff
- cadence.NewArray(collectorClusters), // collectorClusters
- cadence.NewArray(clusterQcVoteData), // clusterQCVoteData
+ cadence.NewArray(collectorClusters),
+ cadence.NewArray(clusterQcVoteData),
```
Suggesting to remove the comments because they're identical to the variable names
```go
// Perform epoch recovery with a new epoch and epoch recover overwriting the current epoch.
t.Run("Can recover the epoch and have everything return to normal", func(t *testing.T) {
```
```diff
- // Perform epoch recovery with a new epoch and epoch recover overwriting the current epoch.
- t.Run("Can recover the epoch and have everything return to normal", func(t *testing.T) {
+ // Perform epoch recovery by transitioning into a new epoch (counter incremented by one)
+ t.Run("Can recover the epoch with a new epoch", func(t *testing.T) {
```
```go
endView        uint64 = 160
targetDuration uint64 = numEpochViews
// invalid epoch counter when recovering the current epoch the counter should equal the current epoch counter
epochCounter uint64 = startEpochCounter + 100
```
```diff
- epochCounter uint64 = startEpochCounter + 100
+ epochCounter uint64 = startEpochCounter + 1
```

Suggesting to test the more likely problematic input (the one that does work with the other version of recovery).
```go
endView        uint64 = 160
targetDuration uint64 = numEpochViews
// invalid epoch counter when recovering the current epoch the counter should equal the current epoch counter
epochCounter uint64 = startEpochCounter + 100
```
```diff
- epochCounter uint64 = startEpochCounter + 100
+ epochCounter uint64 = startEpochCounter
```
Likewise, let's test the case that's most likely to cause problems.
```go
	})
})

t.Run("Recover epoch transaction panics when recovery epoch counter is less than currentCounter and unsafeAllowOverwrite is false", func(t *testing.T) {
```
Regarding the set of test cases about when we expect the transaction to panic, I feel like they can be structured to more clearly communicate our expectations.
The expectations are simple:
- if you are recovering by overwriting the current epoch, you must pass in exactly recoveryEpochCounter = currentEpochCounter and unsafeAllowOverwrite = true
- if you are recovering by transitioning into a new epoch, you must pass in exactly recoveryEpochCounter = currentEpochCounter+1 and unsafeAllowOverwrite = false
- every other possible combination will panic
These expectations are communicated well by the descriptions of the first two test cases:
(1)
t.Run("Panics when recovering a new epoch and recover epoch counter is not equal to the current epoch counter + 1", func(t *testing.T) {
(2)
t.Run("Panics when recovering the current epoch and recover epoch counter is not equal to the current epoch counter", func(t *testing.T) {
The subsequent two test cases add more coverage by testing more input values (👍), but their descriptions describe subsets of the expectations described by (1) and (2). For me at least this made it harder to understand as a whole. I think the tests would be clearer if we added the invalid inputs we want to test within those two first test cases, where we fully define the range of valid input values that we are testing.
Suppose currentEpochCounter has some value C. Then, for example, (1) could test recoveryEpochCounter inputs C, C-1, C+2. (2) could test recoveryEpochCounter inputs C-1, C+1.
For each of (1) and (2), we can extract the test functionality into a closure and then do something like:

```go
invalidEpochCounters := []uint64{C, C - 1, C + 2} // C+1 is the only valid input
for _, epochCounter := range invalidEpochCounters {
	// run the test case with epochCounter
}
```
```go
}
runWithDefaultContracts(t, epochConfig, func(b emulator.Emulator, env templates.Environment, ids []string, idTableAddress flow.Address, IDTableSigner sdkcrypto.Signer, adapter *adapters.SDKAdapter) {
	// Advance to epoch Setup and make sure that the epoch cannot be ended
	advanceView(t, b, env, idTableAddress, IDTableSigner, 1, "EPOCHSETUP", false)
```
Unless I'm missing one, all of the test cases transition into the EpochSetup phase prior to executing the recovery. I would like to have at least one test case where we execute the recovery while in the Staking phase.
was called "publish participant" but was actually publishing admin
@jordanschalm It looks like there are a lot of unresolved comments from you on this PR. Since Khalil is leaving on Friday, I'm just wondering who is going to action them and when that will get done. I also don't really know if I want to review all of this again until the comments have all been actioned.
I'm also a little confused about the branches here. This is targeting the feature/efm-recovery
branch, but it looks like this contains all the upgrades for efm recovery. Should this be targeting master instead?
```cadence
access(all) let voterIDs: [String]

init(aggregatedSignature: String, voterIDs: [String]) {
    self.aggregatedSignature = aggregatedSignature
```
Are there any pre-conditions we can add here to verify anything?
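As one possible answer, a couple of cheap structural checks come to mind. The Go sketch below mirrors the Cadence init for illustration only — the struct and the specific checks (hex-decodability of the signature string, non-empty voter list) are hypothetical candidates, not the contract's actual validation:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// ClusterQCVoteData mirrors the Cadence struct for illustration; the checks
// in newClusterQCVoteData are hypothetical candidates for pre-conditions.
type ClusterQCVoteData struct {
	AggregatedSignature string
	VoterIDs            []string
}

func newClusterQCVoteData(aggregatedSignature string, voterIDs []string) (*ClusterQCVoteData, error) {
	// The aggregated signature is hex-encoded, so it should at least decode.
	if _, err := hex.DecodeString(aggregatedSignature); err != nil {
		return nil, fmt.Errorf("aggregated signature must be valid hex: %w", err)
	}
	// A QC without voters cannot represent a valid quorum.
	if len(voterIDs) == 0 {
		return nil, fmt.Errorf("voterIDs must not be empty")
	}
	return &ClusterQCVoteData{AggregatedSignature: aggregatedSignature, VoterIDs: voterIDs}, nil
}

func main() {
	if _, err := newClusterQCVoteData("zz", []string{"node1"}); err == nil {
		panic("expected invalid hex to be rejected")
	}
	fmt.Println("ok")
}
```

Full cryptographic verification of the QC is a different matter (flagged elsewhere in this review as out of scope).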
@joshuahannan I will address the feedback at the end of this week and leave a handover comment for @jordanschalm. This took longer because there is an extremely long feedback loop for Cadence PRs. We expect an audit type of review from you ("will this break anything, can this be exploited?"); that type of review can be done while feedback is addressed. It's nice to have feedback from everyone so it can all be addressed at once; that makes the feedback loop faster. cc: @AlexHentschel
No. feature/efm-recovery is the branch we use both here in this repo and in flow-go to contain all EFM-related feature changes. When this PR is approved, it will be merged to feature/efm-recovery. We may also make additional PRs and changes that will go into feature/efm-recovery. When all issues are covered, feature/efm-recovery will be merged to master — the same feature-branch strategy we use throughout Flow.
Thanks for the clarifications! The business logic feedback could affect the security of this, though, so my security audit review won't really matter if there are business logic changes after I review. I'll probably still wait until all the changes from Jordan's comments have been actioned. Thank you for staying on top of it, and I apologize for the slow review times during the lead-up to Crescendo.
since it is specifically enforcing the named Invariant (1), and checking both empty and non-empty submissions.
Merged changes from #441 into this PR.
… correct EpochRecover service event
@jordanschalm I have made some changes to submit group key separately so the API was uniform with data structure of |
Co-authored-by: Tarak Ben Youssef <50252200+tarakby@users.noreply.github.com> Co-authored-by: Joshua Hannan <hannanjoshua19@gmail.com>
…ontracts into jord/6213-dkg-mapping
…nsaction (also regenerate) Conflicts: contracts/epochs/FlowDKG.cdc contracts/epochs/FlowEpoch.cdc lib/go/contracts/internal/assets/assets.go lib/go/test/epoch_test_helpers.go lib/go/test/test.go
This PR updates the FlowEpoch smart contract to support recovering the network while in Epoch Fallback Mode. It adds a new service event, EpochRecover, which contains the metadata for the recovery epoch. This metadata is generated out of band using the bootstrap utility util epoch efm-recover-tx-args (onflow/flow-go#5576) and submitted to the contract with the recovery_epoch.cdc transaction. The FlowEpoch contract will end the current epoch, start the recovery epoch, and store the metadata for the recovery epoch in storage. This metadata will then be emitted to the network during the next heartbeat interval.
Reopening original PR: #420