Add migration to fix coretime state (WIP) #458
base: main
Conversation
This reverts commit 0367a60.
Left a few comments. In general this patch is relatively hard to reason about: lots of magic numbers and implicit state tracking, e.g. with core numbers.
    assignment: CoreAssignment::Task(para_id),
}]));

Workplan::<Runtime>::insert((287565, core_id), workplan_entry);
Confirmed: This is indeed the region_begin.
TODO: Check on what occasions this is sent to the relay chain.
}

// Incorrect potential renewals in state
const INCORRECT_RENEWAL_IDS: [(u16, u32); 6] =
Confirmed: all leases added below (compared to coretime chain state) are to be found here.
(2000, 313920),
(3338, 329040),
(2004, 329040),
(3344, 344160),
Checked (some samples): values here are indeed the offset amount of blocks larger than the values in state.
Leases::<Runtime>::put(leases);

// This reorders the cores, so the existing entries to the workplan need to be overwritten.
for (&(para_id, _), core_id) in LEASES.iter().zip(51u16..55u16) {
This looks wrong. Why only 4 cores?
The first 6 entries in LEASES match the ones in INCORRECT_RENEWAL_IDS
... verified with chain state.
So we are adding workplan entries for those 6 entries, which are now leases again and not yet up for renewal.
Chain state for para 2094 was: same region (287565), but core 5. This adds another entry for that para on a core in the range 51..55.
The comment above suggests that some entries are overridden, but I don't see how:
- There is no entry for cores 51 to 55 right now.
- Adding another entry for the same para on a different core will not override the assignment for the para, but add another one. So now we have the same para scheduled on two cores.
In general, this code would benefit from being separated into functions, each taking care of one item in the hackmd.
// entirely.
for (&(para_id, _), core_id) in POTENTIAL_RENEWALS
    .iter()
    .zip((sale_info.first_core + sale_info.cores_offered)..POTENTIAL_RENEWALS.len() as u16)
This looks like an empty range? 56..5?
Workplan::<Runtime>::insert((292605, 61), pool_assignment.clone());

// Add the system pool core to the workplan starting now.
let now = 287000; // TODO - this needs to be after the cores just added have been processed, which takes
Unclear whether we really need to account for the two sessions. If we do, we can fetch the current relay chain block number and add two sessions' worth of blocks instead of hard-coding a value.
// Add the system pool core to the workplan for the next cycle (287565) on the last new core
// (core 61)
Workplan::<Runtime>::insert((292605, 61), pool_assignment.clone());
Where is this number coming from?
// Incorrect potential renewals in state
const INCORRECT_RENEWAL_IDS: [(u16, u32); 6] =
    [(6, 292605), (5, 292605), (13, 292605), (15, 292605), (47, 292605), (44, 292605)];
Checked with chain state: Matching. (This is the entire list of currently registered potential renewals, as expected.)
}

// Remove the existing 6 PotentialRenewals.
for &(core, when) in INCORRECT_RENEWAL_IDS.iter() {
Looks good.
WIP fix to the issues with the Coretime state
Notes and plan here: https://hackmd.io/kurv6rTXTF6yvaUJ9uSNoQ