
Khalil/6959 Recover Epoch transaction args script #5576

Merged
merged 57 commits · Apr 17, 2024
Changes from 15 commits
Commits (57)
311115f
move reusable util funcs to common package
kc1116 Mar 22, 2024
7d09f47
generate tx args
kc1116 Mar 22, 2024
ffbd640
add happy path tests
kc1116 Mar 29, 2024
45fcead
Update cmd/bootstrap/cmd/keys.go
kc1116 Apr 1, 2024
18f7aec
Update keys.go
kc1116 Apr 1, 2024
c82ffbc
move NotEjected filter to filter package
kc1116 Apr 1, 2024
321da97
get subsets of internal collectors and partner collectors from snapshot
kc1116 Apr 1, 2024
9a4542d
Update cmd/util/cmd/epochs/cmd/recover.go
kc1116 Apr 2, 2024
91d6599
Update cmd/util/cmd/epochs/cmd/recover.go
kc1116 Apr 2, 2024
051629d
Update cmd/util/cmd/epochs/cmd/recover.go
kc1116 Apr 2, 2024
1399e18
Update cmd/util/cmd/epochs/cmd/recover.go
kc1116 Apr 2, 2024
2b9c152
Update recover.go
kc1116 Apr 2, 2024
3c8ee33
epoch counter should be an input
kc1116 Apr 2, 2024
d9db0eb
smart contract should generate random source with revertibleRandom
kc1116 Apr 2, 2024
f48b5a7
Update cmd/util/cmd/epochs/cmd/recover.go
kc1116 Apr 2, 2024
0cb7cae
Update cmd/util/cmd/epochs/cmd/recover.go
kc1116 Apr 2, 2024
884fdbf
add epoch-length and epoch-staking-phase-length
kc1116 Apr 2, 2024
6aacb01
use for range loop
kc1116 Apr 2, 2024
82c5803
dkg group key should be the first key in the array
kc1116 Apr 2, 2024
1d67790
Update cmd/util/cmd/common/clusters.go
kc1116 Apr 2, 2024
1c86a58
Merge branch 'khalil/6959-efm-recvery-epoch-data-generation' of githu…
kc1116 Apr 2, 2024
8d6611f
add godoc for ConstructRootQCsForClusters
kc1116 Apr 2, 2024
263974e
add godoc for *PartnerInfo util funcs
kc1116 Apr 2, 2024
c6a1989
document GetSnapshotAtEpochAndPhase arguments
kc1116 Apr 2, 2024
32df104
refactor fatal level logs
kc1116 Apr 2, 2024
4b3dae6
fix node ids in test root snapshot fixture
kc1116 Apr 2, 2024
3a8ec88
remove debug logs
kc1116 Apr 2, 2024
0060ce9
Update cmd/util/cmd/common/clusters.go
kc1116 Apr 3, 2024
3a2b99e
Update cmd/util/cmd/common/clusters.go
kc1116 Apr 3, 2024
fcb1006
Apply suggestions from code review
kc1116 Apr 3, 2024
340d3ab
Apply suggestions from code review
kc1116 Apr 3, 2024
6070f4e
add integration tests skeleton
kc1116 Apr 8, 2024
4b71333
Merge branch 'khalil/6959-efm-recvery-epoch-data-generation' of githu…
kc1116 Apr 8, 2024
b491179
Update service_events_fixtures.go
kc1116 Apr 8, 2024
fb3ec2c
lint fix
kc1116 Apr 8, 2024
a8cfa20
Update recover.go
kc1116 Apr 8, 2024
22511ab
fix lint
kc1116 Apr 8, 2024
2eb6b11
fix imports
kc1116 Apr 8, 2024
ad9cede
fix cmd unit tests add missing "wrote file" logs
kc1116 Apr 9, 2024
2701102
remove extra SN nodes
kc1116 Apr 9, 2024
e07c96a
Update recover_epoch_efm_test.go
kc1116 Apr 9, 2024
6bc838f
Merge branch 'master' into khalil/6959-efm-recvery-epoch-data-generation
kc1116 Apr 9, 2024
c27754f
Update integration/tests/epochs/base_suite.go
kc1116 Apr 15, 2024
47024da
use 0 as a default value force the user to provide values
kc1116 Apr 15, 2024
603d1da
Apply suggestions from code review
kc1116 Apr 15, 2024
b40197e
Update cmd/util/cmd/common/node_info.go
kc1116 Apr 15, 2024
6053858
Update cmd/util/cmd/common/node_info.go
kc1116 Apr 15, 2024
a7e6e2c
Apply suggestions from code review
kc1116 Apr 15, 2024
f823c45
Update cmd/util/cmd/common/node_info.go
kc1116 Apr 15, 2024
47361e2
Merge branch 'khalil/6959-efm-recvery-epoch-data-generation' of githu…
kc1116 Apr 15, 2024
5e3dffd
add sanity check ensure all node weights are equal when generating cl…
kc1116 Apr 15, 2024
897809d
Merge branch 'master' into khalil/6959-efm-recvery-epoch-data-generation
kc1116 Apr 15, 2024
a55fda1
Update node_info.go
kc1116 Apr 16, 2024
3ee695d
Merge branch 'khalil/6959-efm-recvery-epoch-data-generation' of githu…
kc1116 Apr 16, 2024
ab88472
Update clusters.go
kc1116 Apr 16, 2024
12e3534
Update clusters.go
kc1116 Apr 16, 2024
e64ad8a
Update clusters.go
kc1116 Apr 16, 2024
2 changes: 1 addition & 1 deletion cmd/bootstrap/cmd/db_encryption_key.go
@@ -56,5 +56,5 @@ func dbEncryptionKeyRun(_ *cobra.Command, _ []string) {
log.Fatal().Err(err).Msg("failed to write file")
}

log.Info().Msgf("wrote file %v", dbEncryptionKeyPath)
log.Info().Msgf("wrote file %s/%s", flagOutdir, dbEncryptionKeyPath)
}
2 changes: 2 additions & 0 deletions cmd/bootstrap/cmd/dkg.go
@@ -43,6 +43,7 @@ func runBeaconKG(nodes []model.NodeInfo) dkg.DKGData {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, fmt.Sprintf(model.PathRandomBeaconPriv, nodeID))
}

// write full DKG info that will be used to construct QC
@@ -56,6 +57,7 @@ func runBeaconKG(nodes []model.NodeInfo) dkg.DKGData {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathRootDKGData)

return dkgData
}
2 changes: 2 additions & 0 deletions cmd/bootstrap/cmd/final_list.go
@@ -2,6 +2,7 @@ package cmd

import (
"fmt"

"github.com/spf13/cobra"

"github.com/onflow/flow-go/cmd"
@@ -71,6 +72,7 @@ func finalList(cmd *cobra.Command, args []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathFinallist)
}

func validateNodes(localNodes []model.NodeInfo, registeredNodes []model.NodeInfo) {
1 change: 1 addition & 0 deletions cmd/bootstrap/cmd/finalize.go
@@ -203,6 +203,7 @@ func finalize(cmd *cobra.Command, args []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathRootProtocolStateSnapshot)
log.Info().Msg("")

// read snapshot and verify consistency
2 changes: 0 additions & 2 deletions cmd/bootstrap/cmd/finalize_test.go
@@ -2,7 +2,6 @@ package cmd

import (
"encoding/hex"
"fmt"
"math/rand"
"path/filepath"
"regexp"
@@ -93,7 +92,6 @@ func TestFinalize_HappyPath(t *testing.T) {
log = log.Hook(hook)

finalize(nil, nil)
fmt.Println(hook.logs.String())
assert.Regexp(t, finalizeHappyPathRegex, hook.logs.String())
hook.logs.Reset()

1 change: 1 addition & 0 deletions cmd/bootstrap/cmd/genconfig.go
@@ -61,6 +61,7 @@ func genconfigCmdRun(_ *cobra.Command, _ []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, flagConfig)
}

// genconfigCmd represents the genconfig command
9 changes: 7 additions & 2 deletions cmd/bootstrap/cmd/key.go
@@ -2,6 +2,7 @@ package cmd

import (
"fmt"

"github.com/onflow/crypto"
"github.com/spf13/cobra"

@@ -97,23 +98,26 @@ func keyCmdRun(_ *cobra.Command, _ []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write file")
}
log.Info().Msgf("wrote file %v", model.PathNodeID)
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathNodeID)

err = common.WriteJSON(fmt.Sprintf(model.PathNodeInfoPriv, nodeInfo.NodeID), flagOutdir, private)
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathNodeInfoPriv)

err = common.WriteText(fmt.Sprintf(model.PathSecretsEncryptionKey, nodeInfo.NodeID), flagOutdir, secretsDBKey)
if err != nil {
log.Fatal().Err(err).Msg("failed to write file")
}
log.Info().Msgf("wrote file %v", model.PathSecretsEncryptionKey)
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathSecretsEncryptionKey)

err = common.WriteJSON(fmt.Sprintf(model.PathNodeInfoPub, nodeInfo.NodeID), flagOutdir, nodeInfo.Public())
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathNodeInfoPub)

// write machine account info
if role == flow.RoleCollection || role == flow.RoleConsensus {

@@ -130,6 +134,7 @@ func keyCmdRun(_ *cobra.Command, _ []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathNodeMachineAccountPrivateKey)
}
}

1 change: 1 addition & 0 deletions cmd/bootstrap/cmd/keygen.go
@@ -129,4 +129,5 @@ func genNodePubInfo(nodes []model.NodeInfo) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathInternalNodeInfosPub)
}
1 change: 1 addition & 0 deletions cmd/bootstrap/cmd/machine_account.go
@@ -85,6 +85,7 @@ func machineAccountRun(_ *cobra.Command, _ []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, fmt.Sprintf(model.PathNodeMachineAccountInfoPriv, nodeID))
}

// readMachineAccountPriv reads the machine account private key files in the bootstrap dir
1 change: 1 addition & 0 deletions cmd/bootstrap/cmd/machine_account_key.go
@@ -61,4 +61,5 @@ func machineAccountKeyRun(_ *cobra.Command, _ []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msg(fmt.Sprintf("wrote file %s/%s", flagOutdir, machineAccountKeyPath))
}
2 changes: 2 additions & 0 deletions cmd/bootstrap/cmd/partner_infos.go
@@ -207,6 +207,7 @@ func writeNodePubInfoFile(info *bootstrap.NodeInfoPub) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, fileOutputPath)
}

// writePartnerWeightsFile writes the partner weights file
@@ -215,6 +216,7 @@ func writePartnerWeightsFile(partnerWeights common.PartnerWeights) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, bootstrap.FileNamePartnerWeights)
}

func printNodeCounts(numOfNodesByType map[flow.Role]int, totalNumOfPartnerNodes, skippedNodes int) {
1 change: 1 addition & 0 deletions cmd/bootstrap/cmd/qc.go
@@ -53,5 +53,6 @@ func constructRootVotes(block *flow.Block, allNodes, internalNodes []bootstrap.N
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, path)
}
}
3 changes: 3 additions & 0 deletions cmd/bootstrap/cmd/rootblock.go
@@ -169,6 +169,7 @@ func rootBlock(cmd *cobra.Command, args []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathNodeInfosPub)
log.Info().Msg("")

log.Info().Msg("running DKG for consensus nodes")
@@ -221,6 +222,7 @@ func rootBlock(cmd *cobra.Command, args []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathIntermediaryBootstrappingData)
log.Info().Msg("")

log.Info().Msg("constructing root block")
@@ -229,6 +231,7 @@ func rootBlock(cmd *cobra.Command, args []string) {
if err != nil {
log.Fatal().Err(err).Msg("failed to write json")
}
log.Info().Msgf("wrote file %s/%s", flagOutdir, model.PathRootBlockData)
log.Info().Msg("")

log.Info().Msg("constructing and writing votes")
1 change: 1 addition & 0 deletions cmd/dynamic_startup.go
@@ -9,6 +9,7 @@ import (
"strings"

"github.com/onflow/crypto"

"github.com/onflow/flow-go/cmd/util/cmd/common"
"github.com/onflow/flow-go/model/bootstrap"
"github.com/onflow/flow-go/model/flow"
19 changes: 10 additions & 9 deletions cmd/util/cmd/common/clusters.go
@@ -3,9 +3,11 @@ package common
import (
"errors"
"fmt"

"github.com/rs/zerolog"

"github.com/onflow/cadence"

"github.com/onflow/flow-go/cmd/bootstrap/run"
"github.com/onflow/flow-go/model/bootstrap"
model "github.com/onflow/flow-go/model/bootstrap"
Expand All @@ -32,7 +34,7 @@ import (
// - log: the logger instance.
// - partnerNodes: identity list of partner nodes.
// - internalNodes: identity list of internal nodes.
// - numCollectionClusters: the number of collectors in each generated cluster.
// - numCollectionClusters: the number of clusters to generate
// Returns:
// - flow.AssignmentList: the generated assignment list.
// - flow.ClusterList: the generate collection cluster list.
@@ -98,13 +100,12 @@ func ConstructClusterAssignment(log zerolog.Logger, partnerNodes, internalNodes
// ConstructRootQCsForClusters constructs a root QC for each cluster in the list.
// Args:
// - log: the logger instance.
// - clusterList: identity list of partner nodes.
// - nodeInfos: identity list of internal nodes.
// - clusterBlocks: the number of collectors in each generated cluster.
// - clusterList: list of clusters
// - nodeInfos: list of NodeInfos (must contain all internal nodes)
// - clusterBlocks: list of root blocks for each cluster
// Returns:
// - flow.AssignmentList: the generated assignment list.
// - flow.ClusterList: the generate collection cluster list.
// - error: if any error occurs. Any error returned from this function is irrecoverable.
func ConstructRootQCsForClusters(log zerolog.Logger, clusterList flow.ClusterList, nodeInfos []bootstrap.NodeInfo, clusterBlocks []*cluster.Block) []*flow.QuorumCertificate {

if len(clusterBlocks) != len(clusterList) {
@@ -126,21 +127,21 @@ func ConstructRootQCsForClusters(log zerolog.Logger, clusterList flow.ClusterLis
return qcs
}

// ConvertClusterAssignmentsCdc converts golang cluster assignments type to cadence array of arrays.
// ConvertClusterAssignmentsCdc converts golang cluster assignments type to Cadence type `[[String]]`.
func ConvertClusterAssignmentsCdc(assignments flow.AssignmentList) cadence.Array {
assignmentsCdc := make([]cadence.Value, len(assignments))
for i, asmt := range assignments {
vals := make([]cadence.Value, asmt.Len())
for j, k := range asmt {
vals[j] = cadence.String(k.String())
for j, nodeID := range asmt {
vals[j] = cadence.String(nodeID.String())
}
assignmentsCdc[i] = cadence.NewArray(vals).WithType(cadence.NewVariableSizedArrayType(cadence.StringType{}))
}

return cadence.NewArray(assignmentsCdc).WithType(cadence.NewVariableSizedArrayType(cadence.NewVariableSizedArrayType(cadence.StringType{})))
}

// ConvertClusterQcsCdc converts golang cluster qcs type to cadence struct.
// ConvertClusterQcsCdc converts cluster QCs from `QuorumCertificate` type to `ClusterQCVoteData` type.
func ConvertClusterQcsCdc(qcs []*flow.QuorumCertificate, clusterList flow.ClusterList) ([]*flow.ClusterQCVoteData, error) {
voteData := make([]*flow.ClusterQCVoteData, len(qcs))
for i, qc := range qcs {
3 changes: 2 additions & 1 deletion cmd/util/cmd/common/utils.go
@@ -8,10 +8,11 @@ import (
"path/filepath"
"strconv"

"github.com/multiformats/go-multiaddr"
"github.com/rs/zerolog"

"github.com/multiformats/go-multiaddr"
"github.com/onflow/crypto"

"github.com/onflow/flow-go/model/bootstrap"
"github.com/onflow/flow-go/model/encodable"
"github.com/onflow/flow-go/model/flow"
54 changes: 44 additions & 10 deletions cmd/util/cmd/epochs/cmd/recover.go
@@ -3,9 +3,11 @@ package cmd
import (
"context"
"fmt"

"github.com/spf13/cobra"

"github.com/onflow/cadence"

"github.com/onflow/flow-go/cmd/bootstrap/run"
"github.com/onflow/flow-go/cmd/util/cmd/common"
epochcmdutil "github.com/onflow/flow-go/cmd/util/cmd/epochs/utils"
@@ -19,12 +21,21 @@ import (
// The full epoch data must be generated manually and submitted with this transaction in order for an
// EpochRecover event to be emitted. This command retrieves the current protocol state identities, computes the cluster assignment using those
// identities, generates the cluster QC's and retrieves the DKG key vector of the last successful epoch.
// This recovery process has some constraints:
// - The RecoveryEpoch must have exactly the same consensus committee as participated in the most recent successful DKG.
// - The RecoveryEpoch must contain enough "internal" collection nodes so that all clusters contain a supermajority of "internal" collection nodes (same constraint as sporks)
var (
generateRecoverEpochTxArgsCmd = &cobra.Command{
Use: "efm-recover-tx-args",
Short: "Generates recover epoch transaction arguments",
Long: "Generates transaction arguments for the epoch recovery transaction.",
Run: generateRecoverEpochTxArgs(getSnapshot),
Long: `
Generates transaction arguments for the epoch recovery transaction.
The epoch recovery transaction is used to recover from any failure in the epoch transition process without requiring a spork.
This recovery process has some constraints:
- The RecoveryEpoch must have exactly the same consensus committee as participated in the most recent successful DKG.
- The RecoveryEpoch must contain enough "internal" collection nodes so that all clusters contain a supermajority of "internal" collection nodes (same constraint as sporks)
`,
Run: generateRecoverEpochTxArgs(getSnapshot),
}

flagAnAddress string
@@ -39,20 +50,41 @@ var (

func init() {
rootCmd.AddCommand(generateRecoverEpochTxArgsCmd)
addGenerateRecoverEpochTxArgsCmdFlags()
err := addGenerateRecoverEpochTxArgsCmdFlags()
if err != nil {
panic(err)
}
}

func addGenerateRecoverEpochTxArgsCmdFlags() {
func addGenerateRecoverEpochTxArgsCmdFlags() error {
generateRecoverEpochTxArgsCmd.Flags().IntVar(&flagCollectionClusters, "collection-clusters", 3,
"number of collection clusters")
// required parameters for network configuration and generation of root node identities
generateRecoverEpochTxArgsCmd.Flags().StringVar(&flagNodeConfigJson, "node-config", "",
generateRecoverEpochTxArgsCmd.Flags().StringVar(&flagNodeConfigJson, "config", "",
"path to a JSON file containing multiple node configurations (fields Role, Address, Weight)")
generateRecoverEpochTxArgsCmd.Flags().StringVar(&flagInternalNodePrivInfoDir, "internal-priv-dir", "", "path to directory "+
"containing the output from the `keygen` command for internal nodes")
generateRecoverEpochTxArgsCmd.Flags().Uint64Var(&flagNumViewsInEpoch, "epoch-length", 4000, "length of each epoch measured in views")
generateRecoverEpochTxArgsCmd.Flags().Uint64Var(&flagNumViewsInStakingAuction, "epoch-staking-phase-length", 100, "length of the epoch staking phase measured in views")
generateRecoverEpochTxArgsCmd.Flags().Uint64Var(&flagEpochCounter, "epoch-counter", 0, "the epoch counter used to generate the root cluster block")

err := generateRecoverEpochTxArgsCmd.MarkFlagRequired("epoch-length")
if err != nil {
return fmt.Errorf("failed to mark epoch-length flag as required")
}
err = generateRecoverEpochTxArgsCmd.MarkFlagRequired("epoch-staking-phase-length")
if err != nil {
return fmt.Errorf("failed to mark epoch-staking-phase-length flag as required")
}
err = generateRecoverEpochTxArgsCmd.MarkFlagRequired("epoch-counter")
if err != nil {
return fmt.Errorf("failed to mark epoch-counter flag as required")
}
err = generateRecoverEpochTxArgsCmd.MarkFlagRequired("collection-clusters")
if err != nil {
return fmt.Errorf("failed to mark collection-clusters flag as required")
}
return nil
}

func getSnapshot() *inmem.Snapshot {
@@ -97,17 +129,17 @@ func generateRecoverEpochTxArgs(getSnapshot func() *inmem.Snapshot) func(cmd *co
}
}

// extractResetEpochArgs extracts the required transaction arguments for the `resetEpoch` transaction
// extractRecoverEpochArgs extracts the required transaction arguments for the `recoverEpoch` transaction.
func extractRecoverEpochArgs(snapshot *inmem.Snapshot) []cadence.Value {
epoch := snapshot.Epochs().Current()

ids, err := snapshot.Identities(filter.IsValidProtocolParticipant)
currentEpochIdentities, err := snapshot.Identities(filter.IsValidProtocolParticipant)
if err != nil {
log.Fatal().Err(err).Msg("failed to get valid protocol participants from snapshot")
}

// separate collector nodes by internal and partner nodes
collectors := ids.Filter(filter.HasRole[flow.Identity](flow.RoleCollection))
collectors := currentEpochIdentities.Filter(filter.HasRole[flow.Identity](flow.RoleCollection))
internalCollectors := make(flow.IdentityList, 0)
partnerCollectors := make(flow.IdentityList, 0)

@@ -119,7 +151,7 @@ func extractRecoverEpochArgs(snapshot *inmem.Snapshot) []cadence.Value {

internalNodesMap := make(map[flow.Identifier]struct{})
for _, node := range internalNodes {
if !ids.Exists(node.Identity()) {
if !currentEpochIdentities.Exists(node.Identity()) {
log.Fatal().Msg(fmt.Sprintf("node ID found in internal node infos missing from protocol snapshot identities: %s", node.NodeID))
}
internalNodesMap[node.NodeID] = struct{}{}
@@ -158,12 +190,14 @@ func extractRecoverEpochArgs(snapshot *inmem.Snapshot) []cadence.Value {
dkgPubKeys := make([]cadence.Value, 0)
Review comment: The first element here will need to be the group public key (see https://github.com/onflow/flow-go/blob/master/model/convert/service_event.go#L360-L362).
nodeIds := make([]cadence.Value, 0)

// NOTE: The RecoveryEpoch will re-use the last successful DKG output. This means that the consensus
// committee in the RecoveryEpoch must be identical to the committee which participated in that DKG.
dkgGroupKeyCdc, cdcErr := cadence.NewString(currentEpochDKG.GroupKey().String())
if cdcErr != nil {
log.Fatal().Err(cdcErr).Msg("failed to get dkg group key cadence string")
}
dkgPubKeys = append(dkgPubKeys, dkgGroupKeyCdc)
for _, id := range ids {
for _, id := range currentEpochIdentities {
Review comment from @jordanschalm (May 2, 2024):
Sorry to comment on a closed PR. I noticed some sanity checks we should add here while reviewing onflow/flow-core-contracts#420.

We should check that currentEpochDKG.Size() == len(currentEpochIdentities.Filter(filter.HasRole(flow.RoleConsensus)))

We already check that there is a DKG key for every consensus node, but not that there is a consensus node for every DKG key.

Added a reminder to the design doc for this.
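A minimal sketch of the suggested check, assuming the identifiers used in this diff (currentEpochDKG, currentEpochIdentities) and the generic filter.HasRole[flow.Identity] filter used elsewhere in this PR; illustrative only, not part of the change:

// Sketch (assumed context, not part of this PR): ensure there is a consensus node
// for every DKG key share, not just a key share for every consensus node.
consensusNodes := currentEpochIdentities.Filter(filter.HasRole[flow.Identity](flow.RoleConsensus))
if currentEpochDKG.Size() != uint(len(consensusNodes)) {
	log.Fatal().Msgf("DKG committee size (%d) does not match number of consensus nodes (%d)",
		currentEpochDKG.Size(), len(consensusNodes))
}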

if id.GetRole() == flow.RoleConsensus {
dkgPubKey, keyShareErr := currentEpochDKG.KeyShare(id.GetNodeID())
if keyShareErr != nil {
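For orientation, a hypothetical invocation of the new command using only the flags visible in this diff. The parent command path and all values are illustrative assumptions, and additional flags (e.g. for the access node connection) may also be required:

util epochs efm-recover-tx-args \
  --epoch-counter=100 \
  --epoch-length=4000 \
  --epoch-staking-phase-length=100 \
  --collection-clusters=3 \
  --config=./node-config.json \
  --internal-priv-dir=./private-root-information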