
[POC] Comment out backwards sync for newPayload #5666

Closed · wants to merge 1 commit

Conversation

@siladu (Contributor) commented Jul 4, 2023

PR description

Running a quick feasibility test based on #5411 (comment)

Fixed Issue(s)

Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
@github-actions bot commented Jul 4, 2023

  • I thought about documentation and added the doc-change-required label to this PR if updates are required.
  • I thought about the changelog and included a changelog update if required.
  • If my PR includes database changes (e.g. KeyValueSegmentIdentifier) I have thought about compatibility and performed forwards and backwards compatibility tests

@siladu (Contributor, Author) commented Jul 5, 2023

This is a significant improvement for Nimbus nodes.

I tried two Besu restarts and it recovered with a brief backwards sync.
It also recovered well after 2 hours of Besu downtime; logs here:

{"@timestamp":"2023-07-05T01:31:38,206","level":"INFO","thread":"vert.x-worker-thread-0","class":"BackwardSyncContext","message":"Starting a new backward sync session","throwable":""}
{"@timestamp":"2023-07-05T01:31:39,241","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncStep","message":"Backward sync phase 1 of 2, 35.52% completed, downloaded 200 headers of at least 563. Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:31:39,795","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncStep","message":"Backward sync phase 1 of 2 completed, downloaded a total of 600 headers. Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:31:49,328","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 5.15% completed, imported 29 blocks of at least 563 (current head 3826139, target head 3826673). Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:31:59,399","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 13.83% completed, imported 78 blocks of at least 564 (current head 3826188, target head 3826674). Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:32:09,468","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 22.30% completed, imported 126 blocks of at least 565 (current head 3826236, target head 3826675). Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:32:19,516","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 31.80% completed, imported 180 blocks of at least 566 (current head 3826290, target head 3826676). Peers: 4","throwable":""}
{"@timestamp":"2023-07-05T01:32:29,563","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 39.15% completed, imported 222 blocks of at least 567 (current head 3826332, target head 3826677). Peers: 4","throwable":""}
{"@timestamp":"2023-07-05T01:32:39,602","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 47.97% completed, imported 272 blocks of at least 567 (current head 3826382, target head 3826677). Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:32:49,648","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 54.67% completed, imported 310 blocks of at least 567 (current head 3826420, target head 3826677). Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:33:01,335","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 64.96% completed, imported 369 blocks of at least 568 (current head 3826479, target head 3826678). Peers: 4","throwable":""}
{"@timestamp":"2023-07-05T01:33:12,082","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 76.27% completed, imported 434 blocks of at least 569 (current head 3826544, target head 3826679). Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:33:23,700","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 87.37% completed, imported 498 blocks of at least 570 (current head 3826608, target head 3826680). Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:33:33,897","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2, 96.50% completed, imported 551 blocks of at least 571 (current head 3826661, target head 3826681). Peers: 4","throwable":""}
{"@timestamp":"2023-07-05T01:33:37,221","level":"INFO","thread":"nioEventLoopGroup-3-5","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2 completed, imported a total of 571 blocks. Peers: 4","throwable":""}
{"@timestamp":"2023-07-05T01:33:38,539","level":"INFO","thread":"ForkJoinPool.commonPool-worker-3","class":"BackwardSyncContext","message":"Backward sync phase 2 of 2 completed, imported a total of 572 blocks. Peers: 4","throwable":""}
{"@timestamp":"2023-07-05T01:33:38,540","level":"INFO","thread":"ForkJoinPool.commonPool-worker-3","class":"BackwardSyncAlgorithm","message":"Current backward sync session is done","throwable":""}
{"@timestamp":"2023-07-05T01:33:48,453","level":"INFO","thread":"vert.x-worker-thread-0","class":"AbstractEngineNewPayload","message":"Imported #3,826,683 / 38 tx / 16 ws / base fee 8 wei / 6,268,529 (20.9%) gas / (0xa1169f1322f41164284db505e31633d34d3c570d3edb4fc4d7647f89370f3b00) in 0.241s. Peers: 4","throwable":""}
{"@timestamp":"2023-07-05T01:33:48,556","level":"INFO","thread":"vert.x-worker-thread-0","class":"AbstractEngineForkchoiceUpdated","message":"VALID for fork-choice-update: head: 0xa1169f1322f41164284db505e31633d34d3c570d3edb4fc4d7647f89370f3b00, finalized: 0x7343be517af13ce4066bac75148e2b87ab44c1257ebdeb128b5f5383f15268f8, safeBlockHash: 0x733b218c2f0dc1a58c2288d397fe3f0cc3c71820425bb3b57b70ba6498df0a78","throwable":""}
{"@timestamp":"2023-07-05T01:34:00,685","level":"INFO","thread":"vert.x-worker-thread-0","class":"AbstractEngineNewPayload","message":"Imported #3,826,684 / 33 tx / 16 ws / base fee 8 wei / 8,642,079 (28.8%) gas / (0xd0b4b7d73c3c51bbbf9d594e04977c81164b86c7f85305d22c6734b3cbc00117) in 0.259s. Peers: 3","throwable":""}
{"@timestamp":"2023-07-05T01:34:12,267","level":"INFO","thread":"vert.x-worker-thread-0","class":"AbstractEngineNewPayload","message":"Imported #3,826,685 / 25 tx / 16 ws / base fee 8 wei / 3,862,129 (12.9%) gas / (0xd54103b7181955f9dcc923ef71aaad8660233cd0b4205118380a90c4922d2a20) in 0.121s. Peers: 3","throwable":""}

@siladu (Contributor, Author) commented Jul 5, 2023

I also tested with Teku and it worked fine, i.e. it got into sync, which uses backwards sync. Restarting either or both clients doesn't really exercise this PR, since it uses in-order lock-step rather than backwards sync.

@siladu (Contributor, Author) commented Jul 5, 2023

Tested Nimbus and Teku with both clients offline for 1.5 hours; both working.
Teku uses lock-step, Nimbus uses backwards sync and had a slight edge in how quickly it got back in sync.

@jflo requested a review from @garyschulte on July 5, 2023 at 17:13
Comment on lines +223 to +226
// LOG.atDebug()
// .setMessage("Parent of block {} is not present, append it to backward sync")
// .addArgument(block::toLogString)
// .log();
A reviewer (Contributor) commented:

IMO we should still log at debug, just change the message to indicate we have dropped it
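
A minimal sketch of that suggestion, assuming the SLF4J-style fluent logger that the commented-out lines appear to use; the class and method names here are illustrative, not Besu's actual code:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: NewPayloadLogging and logDroppedPayload are hypothetical names;
// blockSummary stands in for block::toLogString from the original snippet.
class NewPayloadLogging {
  private static final Logger LOG = LoggerFactory.getLogger(NewPayloadLogging.class);

  void logDroppedPayload(final String blockSummary) {
    // Same fluent call as the commented-out lines, but the message now records
    // that the payload was dropped instead of being appended to backward sync.
    LOG.atDebug()
        .setMessage("Parent of block {} is not present, dropping payload instead of appending to backward sync")
        .addArgument(blockSummary)
        .log();
  }
}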

@siladu (Contributor, Author) replied:

I won't be merging this PR. If we want to progress this, we should create an issue and prioritise it.

I think we should take a more considered approach to ensure we align with the spec.
Also, there's more code we can rip out/refactor if we remove this code path.
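
For reference, the spec mentioned above is the Engine API, which has engine_newPayload respond with status SYNCING when the data needed to validate a payload (such as its parent block) is missing. A rough, hypothetical sketch of that decision, using stand-in names rather than Besu's actual classes:

import java.util.Optional;

// Hypothetical illustration only: Chain, PayloadStatus and handleNewPayload are
// stand-ins for the real engine-API handling code, not Besu types.
class NewPayloadSketch {
  enum PayloadStatus { VALID, INVALID, SYNCING }

  interface Chain {
    Optional<Object> getHeaderByHash(String parentHash);
  }

  PayloadStatus handleNewPayload(final Chain chain, final String parentHash) {
    if (chain.getHeaderByHash(parentHash).isEmpty()) {
      // With the backward-sync trigger removed, an unknown parent is simply
      // reported back as SYNCING rather than queued for backward sync.
      return PayloadStatus.SYNCING;
    }
    // ...validate and import the payload here...
    return PayloadStatus.VALID;
  }
}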

@siladu changed the title from "Comment out backwards sync for newPayload" to "[POC] Comment out backwards sync for newPayload" on Jul 6, 2023
@garyschulte (Contributor) commented:

Closing; we will revisit #5411 as needed.
