
Core time: Core Management (request_core_count) #2211

Closed
6 tasks done
eskimor opened this issue Nov 7, 2023 · 4 comments

@eskimor
Member

eskimor commented Nov 7, 2023

The core time chain manages cores for the relay chain. We have four variables to consider:

  1. Total number of cores derived from (bulk + on-demand + legacy auction)
  2. Number of legacy cores
  3. Number of bulk cores
  4. Number of on-demand cores

Any bulk core can become an on-demand core, simply by placing an indefinite (no end_hint) pool assignment for that core. Thus it makes sense to unify those two: we will have a number of bulk cores, and the core time chain can decide how many of those should be on-demand, simply by sending appropriate assignments. This reduces the above list to:

  1. Total number of cores derived from (bulk + on-demand + legacy auction)
  2. Number of legacy cores
  3. Number of bulk cores
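The unification above can be sketched in a few lines. This is a hypothetical illustration: the type, variant, and field names are assumptions for this sketch, not the actual runtime API.

```rust
// A bulk core becomes an on-demand core simply by giving it a pool
// assignment with no `end_hint`. All names here are illustrative.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum CoreAssignment {
    /// Core is assigned to a specific task (e.g. a parachain id).
    Task(u32),
    /// Core serves the on-demand (instantaneous) pool.
    Pool,
}

struct Schedule {
    assignment: CoreAssignment,
    /// `None` means the assignment never expires: the core stays in
    /// this state until it is reassigned.
    end_hint: Option<u32>,
}

/// Turn a bulk core into an on-demand core: an indefinite pool assignment.
fn make_on_demand() -> Schedule {
    Schedule { assignment: CoreAssignment::Pool, end_hint: None }
}

fn main() {
    let schedule = make_on_demand();
    assert_eq!(schedule.assignment, CoreAssignment::Pool);
    assert!(schedule.end_hint.is_none());
}
```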

Restrictions

The total number of cores determines things like backing groups and is not supposed to change within a session. The relay chain needs to stay in control of when a change in the total number of cores happens.

Current State

Currently there is no bulk, just on-demand and legacy. The number of cores for each is managed via the relay chain configuration.

Desired State

Legacy, as it is getting phased out anyway, will stay managed by the relay chain. Thus we will have two types of cores: bulk + legacy. The number of legacy cores is transparent to the core time chain.

E.g. if we have 40 legacy cores, then bulk cores will start at core index 40 from the perspective of the relay chain. The core time chain does not need to be concerned with this at all: if there are, let's say, 10 bulk cores to start with, they will be indexed [0..9] from the perspective of the core time chain. The relay chain will do the offset calculation, adding the number of legacy cores to get the "correct" core number for assignments as received from the core time chain.
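The offset calculation amounts to a single addition. A minimal sketch, using the 40-legacy-core example above; the function name is hypothetical:

```rust
type CoreIndex = u16;

/// Map a core index as seen by the core time chain into the relay
/// chain's index space by adding the number of legacy cores as an offset.
fn coretime_to_relay_core(coretime_index: CoreIndex, num_legacy_cores: CoreIndex) -> CoreIndex {
    num_legacy_cores + coretime_index
}

fn main() {
    // Core time cores [0..9] map to relay chain cores [40..49].
    assert_eq!(coretime_to_relay_core(0, 40), 40);
    assert_eq!(coretime_to_relay_core(9, 40), 49);
}
```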

On-demand/Instantaneous

We would suggest that the core time chain has a configuration setting the number of desired on-demand cores (the old configuration in the relay chain will thus be removed). With this, the core time chain will issue a normal assign_core message whenever that configuration changes. This allows for maximum flexibility: e.g. eventually this might not be a configuration at all, but determined automatically based on demand for normal bulk cores.
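To make this concrete, here is an illustrative sketch of which cores the core time chain would (re)assign to the pool when the configuration changes. Keeping pool cores at the highest indices matches the recommendation below for safe core-count reductions. The function name is an assumption for this sketch:

```rust
// Compute the core indices (core time chain view) that should serve
// the on-demand pool: the top `num_on_demand` indices.
fn on_demand_core_indices(num_bulk_cores: u16, num_on_demand: u16) -> Vec<u16> {
    (num_bulk_cores.saturating_sub(num_on_demand)..num_bulk_cores).collect()
}

fn main() {
    // With 10 bulk cores and 3 on-demand cores, cores 7, 8 and 9 pool.
    assert_eq!(on_demand_core_indices(10, 3), vec![7, 8, 9]);
}
```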

Total number of (bulk) cores

As explained above, the total number of cores is only allowed to change at session boundaries. With this restriction, the interface as described in RFC-5:

fn request_core_count(
    count: u16,
)

and

fn notify_core_count(
    count: u16,
)

works, but the dependency on session buffering is hidden. It is important to realize that it might take the relay chain up to almost two sessions to send back the notify_core_count response. The core time chain should be able to handle this long delay gracefully.

It is worth mentioning that the core count we are talking about here is the number of bulk cores (including instantaneous cores, as designated via assignments).

Due to the asynchronicity of message passing, on a change the number of cores available as seen by the core time chain and by the relay chain will be out of sync for a while. The consequences should be minor though, if some care is taken on reduction:

Reducing the number of cores

This is the more dangerous change. It is recommended to keep native (provided by the system, not by assignment from a buyer) pool/instantaneous cores at the top (highest core numbers), as a reduction of those cores will have no negative impact on buyers.

Due to the above-mentioned asynchronicity, it is theoretically possible for the relay chain (already operating at the reduced core count) to drop assignments it receives from the core time chain (still operating at the larger core count). This should not be a real concern though, as those assignments would have become void shortly afterwards anyway. To avoid negative side effects, it is recommended to stop selling core assignments for a core long before removing it.
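The dropping behaviour described above can be sketched as a simple range check on the relay chain side. This is an illustration under assumed names, not the actual implementation:

```rust
// The relay chain, already operating at a reduced core count, silently
// drops incoming assignments whose core index is out of range.
fn filter_valid_assignments(assignments: Vec<u16>, core_count: u16) -> Vec<u16> {
    assignments
        .into_iter()
        .filter(|&core_index| core_index < core_count)
        .collect()
}

fn main() {
    // Core count was reduced from 10 to 8; assignments for cores 8 and 9
    // (sent before the core time chain saw the reduction) are dropped.
    assert_eq!(filter_valid_assignments(vec![3, 8, 9], 8), vec![3]);
}
```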

Increasing the number of cores

This is non-problematic, as the relay chain will always have updated its core count before the broker chain. Thus, in this case the core count on the relay chain will always be equal to or larger than the core time chain's view. Hence there is no risk of sent assignments being invalid due to asynchronicity.

Implementation

  • core time: Have overall bulk core number configuration on the core time chain
  • core time: Send request_core_count messages to the relay chain whenever that configuration is changed.
  • relay: Buffer the requested core count in the session-buffered configuration. On session change, send notify_core_count back whenever the core count changes. That configuration should not be changeable other than via a request_core_count message coming from the core time chain.
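The relay-side buffering described above can be sketched as follows. All names are illustrative assumptions; the point is that request_core_count only buffers, and the active count (plus the notify_core_count reply) only changes at a session boundary:

```rust
// Minimal sketch of session-buffered core-count handling on the relay chain.
#[derive(Default)]
struct CoreCountState {
    active: u16,
    pending: Option<u16>,
}

impl CoreCountState {
    /// Handle `request_core_count` from the core time chain: buffer only,
    /// never change the active count mid-session.
    fn request_core_count(&mut self, count: u16) {
        self.pending = Some(count);
    }

    /// On session change, apply a pending count. Returns the value to
    /// send back via `notify_core_count`, if the count actually changed.
    fn on_session_change(&mut self) -> Option<u16> {
        match self.pending.take() {
            Some(count) if count != self.active => {
                self.active = count;
                Some(count)
            }
            _ => None,
        }
    }
}

fn main() {
    let mut state = CoreCountState::default();
    state.request_core_count(5);
    // Nothing happens until the session boundary.
    assert_eq!(state.active, 0);
    // On session change the count is applied and notified.
    assert_eq!(state.on_session_change(), Some(5));
    assert_eq!(state.active, 5);
    // Nothing pending: no notification.
    assert_eq!(state.on_session_change(), None);
}
```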

Implementation Phase 2:

  • relay: Phase out legacy parachains (reducing core count on the relay chain)
  • core time: Increase bulk core count on the core time chain accordingly
  • relay: Once no legacy chains exist anymore, remove code and configuration. Only existing assignments will be "bulk" now.
@eskimor eskimor converted this from a draft issue Nov 7, 2023
@eskimor eskimor changed the title Core Count Management Core Management Nov 9, 2023
@joepetrowski
Contributor

Due to the above mentioned asynchronicity, it is theoretically possible for the relay chain (already operating at the reduced core count) to drop assignments it receives from the core time chain (still operating at the larger core count). This should not be a real concern though, as those assignments would have become void shortly afterwards anyway. To avoid negative side effects, it would be recommended to stop selling core assignments for a core long before removing it.

Similar to how assets get trapped in XCM, perhaps when a core assignment is dropped, the Relay Chain could store (or send a message back to the Coretime chain) with a ticket. The chain that didn't get its block executed could then claim it for a new core assignment.

The price may have changed in the meantime, but this should be a pretty rare occurrence, and the buyer was willing to pay what they previously did.

@BradleyOlson64
Contributor

It is important to realize that it might take the relay chain up to almost two sessions to send back the notify_core_count response.

I see why it would take until at least the start of the next session, but why nearly two sessions?

@eskimor
Member Author

eskimor commented Nov 14, 2023

The next session must already be known in the previous session, for determinism. E.g. imagine that the parameters of the next session were free to change until the very last block of the previous session. If there is then a re-org/reversion, we might end up with two sessions for a given session index which actually differ. This is not sound. See also: #633

@eskimor eskimor changed the title Core Management Core time: Core Management (request_core_count) Nov 15, 2023
@eskimor eskimor moved this from Backlog to In Progress in parachains team board Dec 1, 2023
@eskimor eskimor self-assigned this Dec 5, 2023
@eskimor eskimor moved this from In Progress to Review in progress in parachains team board Dec 21, 2023
@eskimor eskimor moved this from Review in progress to Completed in parachains team board Mar 19, 2024
@eskimor
Member Author

eskimor commented Apr 3, 2024

Done.

@eskimor eskimor closed this as completed Apr 3, 2024
bkontur added a commit that referenced this issue Jul 4, 2024
Original PR with more context: paritytech/parity-bridges-common#2211


Signed-off-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Svyatoslav Nikolsky <svyatonik@gmail.com>