Expose ClaimQueue via a runtime api and use it in collation-generation (#3580)
Changes from all commits
```diff
@@ -38,21 +38,25 @@ use polkadot_node_primitives::{
 	SubmitCollationParams,
 };
 use polkadot_node_subsystem::{
-	messages::{CollationGenerationMessage, CollatorProtocolMessage},
+	messages::{CollationGenerationMessage, CollatorProtocolMessage, RuntimeApiRequest},
 	overseer, ActiveLeavesUpdate, FromOrchestra, OverseerSignal, RuntimeApiError, SpawnedSubsystem,
 	SubsystemContext, SubsystemError, SubsystemResult,
 };
 use polkadot_node_subsystem_util::{
-	request_async_backing_params, request_availability_cores, request_persisted_validation_data,
-	request_validation_code, request_validation_code_hash, request_validators,
+	has_required_runtime, request_async_backing_params, request_availability_cores,
+	request_claim_queue, request_persisted_validation_data, request_validation_code,
+	request_validation_code_hash, request_validators,
 };
 use polkadot_primitives::{
 	collator_signature_payload, CandidateCommitments, CandidateDescriptor, CandidateReceipt,
-	CollatorPair, CoreState, Hash, Id as ParaId, OccupiedCoreAssumption, PersistedValidationData,
-	ValidationCodeHash,
+	CollatorPair, CoreIndex, CoreState, Hash, Id as ParaId, OccupiedCoreAssumption,
+	PersistedValidationData, ScheduledCore, ValidationCodeHash,
 };
 use sp_core::crypto::Pair;
-use std::sync::Arc;
+use std::{
+	collections::{BTreeMap, VecDeque},
+	sync::Arc,
+};

 mod error;
```
```diff
@@ -223,6 +227,7 @@ async fn handle_new_activations<Context>(
 	let availability_cores = availability_cores??;
 	let n_validators = validators??.len();
 	let async_backing_params = async_backing_params?.ok();
+	let maybe_claim_queue = fetch_claim_queue(ctx.sender(), relay_parent).await?;

 	for (core_idx, core) in availability_cores.into_iter().enumerate() {
 		let _availability_core_timer = metrics.time_new_activations_availability_core();
```
```diff
@@ -239,10 +244,25 @@ async fn handle_new_activations<Context>(
 				// TODO [now]: this assumes that next up == current.
 				// in practice we should only set `OccupiedCoreAssumption::Included`
 				// when the candidate occupying the core is also of the same para.
-				if let Some(scheduled) = occupied_core.next_up_on_available {
-					(scheduled, OccupiedCoreAssumption::Included)
-				} else {
-					continue
+				let res = match maybe_claim_queue {
+					Some(ref claim_queue) => {
+						// read what's in the claim queue for this core
+						fetch_next_scheduled_on_core(
+							claim_queue,
+							CoreIndex(core_idx as u32),
+						)
+					},
+					None => {
+						// Runtime doesn't support claim queue runtime api. Fallback to
+						// `next_up_on_available`
+						occupied_core.next_up_on_available
+					},
+				}
+				.map(|scheduled| (scheduled, OccupiedCoreAssumption::Included));
+
+				match res {
+					Some(res) => res,
+					None => continue,
 				}
 			},
 			_ => {
```
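The hunk above folds the claim-queue lookup and the legacy `next_up_on_available` fallback into a single `Option`, then maps it into the `(ScheduledCore, OccupiedCoreAssumption)` pair. A minimal, self-contained sketch of that control-flow shape, using simplified stand-in types rather than the real polkadot-primitives:

```rust
use std::collections::{BTreeMap, VecDeque};

// Simplified stand-ins for the real polkadot-primitives types.
type ParaId = u32;
type CoreIndex = u32;

#[derive(Debug, Clone, PartialEq)]
struct ScheduledCore {
    para_id: ParaId,
}

// Mirrors `fetch_next_scheduled_on_core`: peek at the front of the
// claim queue for one core.
fn next_scheduled_on_core(
    claim_queue: &BTreeMap<CoreIndex, VecDeque<ParaId>>,
    core_idx: CoreIndex,
) -> Option<ScheduledCore> {
    claim_queue
        .get(&core_idx)?
        .front()
        .cloned()
        .map(|para_id| ScheduledCore { para_id })
}

// Mirrors the occupied-core branch: prefer the claim queue when the
// runtime exposes it, otherwise fall back to `next_up_on_available`.
fn select_next(
    maybe_claim_queue: &Option<BTreeMap<CoreIndex, VecDeque<ParaId>>>,
    core_idx: CoreIndex,
    next_up_on_available: Option<ScheduledCore>,
) -> Option<ScheduledCore> {
    match maybe_claim_queue {
        Some(claim_queue) => next_scheduled_on_core(claim_queue, core_idx),
        None => next_up_on_available,
    }
}

fn main() {
    let mut claim_queue = BTreeMap::new();
    claim_queue.insert(0, VecDeque::from([100, 200]));

    // Claim queue available: core 0 serves para 100 next.
    let picked = select_next(&Some(claim_queue), 0, None);
    assert_eq!(picked, Some(ScheduledCore { para_id: 100 }));

    // No claim queue support: fall back to `next_up_on_available`.
    let fallback = select_next(&None, 0, Some(ScheduledCore { para_id: 300 }));
    assert_eq!(fallback, Some(ScheduledCore { para_id: 300 }));
}
```

A `None` at the end of either branch still means "skip this core", which is what the `None => continue` arm in the real code expresses.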
```diff
@@ -600,3 +620,37 @@ fn erasure_root(
 	let chunks = polkadot_erasure_coding::obtain_chunks_v1(n_validators, &available_data)?;
 	Ok(polkadot_erasure_coding::branches(&chunks).root())
 }
+
+// Checks if the runtime supports `request_claim_queue` and executes it. Returns `Ok(None)`
+// otherwise. Any [`RuntimeApiError`]s are bubbled up to the caller.
+async fn fetch_claim_queue(
+	sender: &mut impl overseer::CollationGenerationSenderTrait,
+	relay_parent: Hash,
+) -> crate::error::Result<Option<BTreeMap<CoreIndex, VecDeque<ParaId>>>> {
+	if has_required_runtime(
+		sender,
+		relay_parent,
+		RuntimeApiRequest::CLAIM_QUEUE_RUNTIME_REQUIREMENT,
+	)
+	.await
+	{
+		let res = request_claim_queue(relay_parent, sender).await.await??;
+		Ok(Some(res))
+	} else {
+		gum::trace!(target: LOG_TARGET, "Runtime doesn't support `request_claim_queue`");
+		Ok(None)
+	}
+}
+
+// Returns the next scheduled `ParaId` for a core in the claim queue, wrapped in `ScheduledCore`.
+// This function is supposed to be used in `handle_new_activations` hence the return type.
+fn fetch_next_scheduled_on_core(
+	claim_queue: &BTreeMap<CoreIndex, VecDeque<ParaId>>,
+	core_idx: CoreIndex,
+) -> Option<ScheduledCore> {
+	claim_queue
+		.get(&core_idx)?
+		.front()
+		.cloned()
+		.map(|para_id| ScheduledCore { para_id, collator: None })
+}
```

Review discussion on lines +630 to +635:

- nit: instead of doing two requests (one for the version and another for the actual runtime API), we could directly try calling the runtime API and handle the error.
- I see your point, but I think it is more or less the same. Each runtime call has a version check before it (done by the runtime API subsystem), so if we call the new API directly on an old runtime and handle the error, we end up with the same sequence of runtime calls. For a new runtime we do save one version check, but (1) we will remove the version check once everything is released on Polkadot, and (2) the runtime calls are cached and never hit the runtime. Considering these, I'd stick with the current version because it is more explicit. Another drawback of relying on …
- yes, that's exactly my point, saving one version check for a new runtime. We'd also save one round-trip to the runtime API subsystem, regardless of the runtime version. Anyway, I don't have very strong feelings about this.
- Your point is valid. I've created an issue for this - #3756
Review discussion on the `TODO [now]` comment:

- I think this TODO is being solved by this PR, isn't it? There's also a tracking issue here: #3327
- I guess you are right. But is it fixed thanks to the updated implementation of `next_up_on_available`? Or am I missing something?
- oh, I think I was wrong actually. I think it should be fixed while implementing #3582