refactor next slot cache #12233
Conversation
@@ -147,25 +147,15 @@ func ProcessSlotsUsingNextSlotCache(
	ctx, span := trace.StartSpan(ctx, "core.state.ProcessSlotsUsingNextSlotCache")
	defer span.End()

	// Check whether the parent state has been advanced by 1 slot in next slot cache.
This is a very simple function that was overly commented, which made it less readable. I removed some useless variables and most of the comments.
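For readers without the full diff, here is a minimal, self-contained sketch of the control flow this function implements. The stub types and lower-cased helper names (nextSlotState, processSlots) are illustrative stand-ins under assumed signatures, not Prysm's actual API:

```go
package transition

import "context"

// BeaconState stands in for Prysm's state.BeaconState interface in this sketch.
type BeaconState interface {
	Slot() uint64
}

// nextSlotState stands in for the next slot cache lookup: it returns a copy of
// the parent state already advanced past its slot, or nil on a cache miss.
func nextSlotState(parentRoot []byte, slot uint64) BeaconState {
	return nil // placeholder: a real lookup consults the cache
}

// processSlots stands in for the full per-slot state transition.
func processSlots(ctx context.Context, st BeaconState, slot uint64) (BeaconState, error) {
	return st, nil // placeholder
}

// processSlotsUsingNextSlotCache advances parentState to slot, resuming from
// the cached next-slot state when one is available.
func processSlotsUsingNextSlotCache(ctx context.Context, parentState BeaconState, parentRoot []byte, slot uint64) (BeaconState, error) {
	// Check whether the parent state was already advanced by one slot in the
	// next slot cache; if so, start from that copy instead of parentState.
	if cached := nextSlotState(parentRoot, slot); cached != nil {
		parentState = cached
	}
	if parentState.Slot() == slot {
		return parentState, nil
	}
	return processSlots(ctx, parentState, slot)
}
```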
	s.headLock.RUnlock()
	_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
		headState: headState,
		headRoot:  headRoot,
		headBlock: headBlock.Block(),
	})
-	return err
+	if err != nil {
+		log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
I wonder if these Debug-verbosity error logs should have higher visibility.
How does this prevent updates to the cache for non-canonical blocks? UpdateNextSlotCache is still called for all blocks processed.
	headRoot := s.headRoot()
	headState := s.headState(ctx)
Is there a reason you switched the ordering from state then root to root and then state?
No reason. I was going to use the cached state as the head state, but that ended up being too hard to do in one PR.
I could prevent it, but decided that it's more useful to actually keep them: we may get blocks that are non-canonical as soon as we insert them, if we receive them near the 4-second boundary, and that become canonical at 12 seconds. The likelihood of this happening is higher than the likelihood of the previous state needing to be advanced another slot to pass an epoch transition, so I decided to keep them here.
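To make this trade-off concrete, here is a rough sketch (reusing the BeaconState stub from the earlier sketch) of a next slot cache that tracks two alternative pre-advanced states, so an entry for a block that is non-canonical at insertion time survives long enough to serve a 1-slot reorg. The field and method names are assumptions for illustration, not the PR's actual implementation:

```go
package transition

import (
	"bytes"
	"sync"
)

// cachedEntry holds a state pre-advanced to the next slot, keyed by the root
// of the block it was built on.
type cachedEntry struct {
	root  []byte
	slot  uint64
	state BeaconState
}

// nextSlotCache keeps up to two alternative entries, so an entry for a block
// that is non-canonical right now can still serve a 1-slot reorg later.
type nextSlotCache struct {
	sync.Mutex
	prev, curr cachedEntry
}

// add stores a new entry, demoting the current one instead of discarding it.
// The root slice is copied so later mutation by the caller cannot corrupt
// the cache key.
func (c *nextSlotCache) add(root []byte, slot uint64, st BeaconState) {
	c.Lock()
	defer c.Unlock()
	r := make([]byte, len(root))
	copy(r, root)
	c.prev = c.curr
	c.curr = cachedEntry{root: r, slot: slot, state: st}
}

// get returns the pre-advanced state for (root, slot), checking both entries.
func (c *nextSlotCache) get(root []byte, slot uint64) BeaconState {
	c.Lock()
	defer c.Unlock()
	for _, e := range [2]cachedEntry{c.curr, c.prev} {
		if e.slot == slot && e.root != nil && bytes.Equal(e.root, root) {
			return e.state
		}
	}
	return nil
}
```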
This PR refactors the next slot cache to keep track of two alternative states, which makes the cache safe under skipped slots and 1-slot reorgs, including across epoch boundaries. The design is described at https://www.notion.so/arbitrum/Next-Slot-Cache-2-0-301e3788c1b248e6a587bd3505c4034a

It also fixes some bugs that made the cache not thread safe, by copying the root slice instead of retaining the caller's slice in the cache.
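Continuing the illustrative sketch above, the unsafe variant below shows the bug pattern being described: retaining the caller's slice means later buffer reuse by the caller silently rewrites the cached key.

```go
// unsafeAdd retains the caller's root slice instead of copying it.
func unsafeAdd(c *nextSlotCache, root []byte, slot uint64, st BeaconState) {
	c.Lock()
	defer c.Unlock()
	c.prev = c.curr
	// BUG: the cache now aliases the caller's memory. If the caller reuses
	// the buffer, the cached key silently changes, and concurrent reuse is
	// a data race. Copying the slice, as in add above, fixes both.
	c.curr = cachedEntry{root: root, slot: slot, state: st}
}
```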