Parachains-Aura: Only produce once per slot #3308

Merged · 4 commits · Feb 13, 2024
Changes from 2 commits
8 changes: 7 additions & 1 deletion cumulus/client/consensus/aura/src/collator.rs
@@ -258,6 +258,7 @@ where
 pub struct SlotClaim<Pub> {
 	author_pub: Pub,
 	pre_digest: DigestItem,
+	slot: Slot,
 	timestamp: Timestamp,
 }

@@ -272,7 +273,7 @@ impl<Pub> SlotClaim<Pub> {
 		P::Public: Codec,
 		P::Signature: Codec,
 	{
-		SlotClaim { author_pub, timestamp, pre_digest: aura_internal::pre_digest::<P>(slot) }
+		SlotClaim { author_pub, timestamp, pre_digest: aura_internal::pre_digest::<P>(slot), slot }
 	}

 	/// Get the author's public key.
@@ -285,6 +286,11 @@ impl<Pub> SlotClaim<Pub> {
 		&self.pre_digest
 	}
 
+	/// Get the slot assigned to this claim.
+	pub fn slot(&self) -> Slot {
+		self.slot
+	}
+
 	/// Get the timestamp corresponding to the relay-chain slot this claim was
 	/// generated against.
 	pub fn timestamp(&self) -> Timestamp {
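For context on the new accessor: `Slot` (from `sp_consensus_slots`) is a thin wrapper around a `u64` that implements `Deref<Target = u64>`, which is why the collator code below compares `*claim.slot()` against a plain integer. A minimal standalone sketch of that pattern (illustrative values, not code from this PR):

use sp_consensus_slots::Slot;

fn main() {
	// `Slot` derefs to the underlying u64 slot number.
	let slot = Slot::from(42u64);
	let raw: u64 = *slot;
	assert_eq!(raw, 42);
	// Slots are ordered by their number, so "only once per slot" checks
	// reduce to a simple integer comparison.
	assert!(Slot::from(43u64) > slot);
}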
16 changes: 16 additions & 0 deletions cumulus/client/consensus/aura/src/collators/basic.rs
@@ -141,6 +141,8 @@ where
 			collator_util::Collator::<Block, P, _, _, _, _, _>::new(params)
 		};
 
+		let mut last_processed_slot = 0;
+
 		while let Some(request) = collation_requests.next().await {
 			macro_rules! reject_with_error {
 				($err:expr) => {{
@@ -192,6 +194,20 @@ where
 				Err(e) => reject_with_error!(e),
 			};
 
+			// With async backing this function will be called every relay chain block.
+			//
+			// Most parachains currently run with 12-second slots and would thus try to
+			// produce multiple blocks per slot, which would very likely fail on chain.
+			// So, we have this "hack" to only produce one block per slot.
+			//
+			// With https://github.com/paritytech/polkadot-sdk/issues/3168 this
+			// implementation will become obsolete and the underlying issue will be
+			// fixed as well.
+			if last_processed_slot >= *claim.slot() {
+				continue
+			} else {
+				last_processed_slot = *claim.slot();

Review comment from alexggh (Contributor), Feb 13, 2024:

Can we move this assignment to the end of the loop, so that we don't save the slot in case we run into errors further down, and let the next request try this slot as well?

Just want to be extra safe and make sure we don't end up in a situation where on the first pass of a slot we don't build the block because the runtime doesn't accept it, and then on the second pass we don't even try anymore.

+			}
+
 			let (parachain_inherent_data, other_inherent_data) = try_request!(
 				collator
 					.create_inherent_data(
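A minimal sketch of the reviewer's suggestion above (hypothetical names, plain `u64` slots, and a stand-in `try_build_block` for the fallible collation attempt; not the merged code): record the slot only after a successful build, so a failed attempt leaves the slot free for the next request.

fn handle_requests(requests: &[u64], try_build_block: impl Fn(u64) -> Result<(), ()>) {
	let mut last_processed_slot = 0u64;
	for &slot in requests {
		if last_processed_slot >= slot {
			// A block was already built for this slot; skip it.
			continue;
		}
		if try_build_block(slot).is_err() {
			// Don't record the slot on failure, so a later request may retry it.
			continue;
		}
		// Record only after the attempt succeeded.
		last_processed_slot = slot;
	}
}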
24 changes: 23 additions & 1 deletion cumulus/client/consensus/aura/src/lib.rs
@@ -42,7 +42,14 @@ use sp_core::crypto::Pair;
 use sp_inherents::CreateInherentDataProviders;
 use sp_keystore::KeystorePtr;
 use sp_runtime::traits::{Block as BlockT, Header as HeaderT, Member, NumberFor};
-use std::{convert::TryFrom, marker::PhantomData, sync::Arc};
+use std::{
+	convert::TryFrom,
+	marker::PhantomData,
+	sync::{
+		atomic::{AtomicU64, Ordering},
+		Arc,
+	},
+};

 mod import_queue;
 
@@ -61,6 +68,7 @@ pub struct AuraConsensus<B, CIDP, W> {
 	create_inherent_data_providers: Arc<CIDP>,
 	aura_worker: Arc<Mutex<W>>,
 	slot_duration: SlotDuration,
+	last_slot_processed: Arc<AtomicU64>,
 	_phantom: PhantomData<B>,
 }

@@ -70,6 +78,7 @@ impl<B, CIDP, W> Clone for AuraConsensus<B, CIDP, W> {
 			create_inherent_data_providers: self.create_inherent_data_providers.clone(),
 			aura_worker: self.aura_worker.clone(),
 			slot_duration: self.slot_duration,
+			last_slot_processed: self.last_slot_processed.clone(),
 			_phantom: PhantomData,
 		}
 	}
@@ -156,6 +165,7 @@ where
 		Box::new(AuraConsensus {
 			create_inherent_data_providers: Arc::new(create_inherent_data_providers),
 			aura_worker: Arc::new(Mutex::new(worker)),
+			last_slot_processed: Default::default(),
 			slot_duration,
 			_phantom: PhantomData,
 		})
@@ -221,6 +231,18 @@ where
 			Some((validation_data.max_pov_size / 2) as usize),
 		);
 
+		// With async backing this function will be called every relay chain block.
+		//
+		// Most parachains currently run with 12-second slots and would thus try to produce
+		// multiple blocks per slot, which would very likely fail on chain. So, we have this
+		// "hack" to only produce one block per slot.
+		//
+		// With https://github.com/paritytech/polkadot-sdk/issues/3168 this implementation
+		// will become obsolete and the underlying issue will be fixed as well.
+		if self.last_slot_processed.fetch_max(*info.slot, Ordering::Relaxed) >= *info.slot {
+			return None
+		}
+
 		let res = self.aura_worker.lock().await.on_slot(info).await?;
 
 		Some(ParachainCandidate { block: res.block, proof: res.storage_proof })
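The guard above leans on `fetch_max` doing a read and a write in one atomic step: it raises `last_slot_processed` to the current slot and returns the value that was stored before, so a previous value at or above `*info.slot` means this slot (or a later one) was already handled, even with concurrent callers. A small self-contained sketch of that behavior (names are illustrative):

use std::sync::atomic::{AtomicU64, Ordering};

// Returns true exactly when `slot` is newer than anything seen before.
fn should_produce(last_slot_processed: &AtomicU64, slot: u64) -> bool {
	// `fetch_max` stores max(current, slot) and returns the *previous* value.
	last_slot_processed.fetch_max(slot, Ordering::Relaxed) < slot
}

fn main() {
	let last = AtomicU64::new(0);
	assert!(should_produce(&last, 5)); // first request for slot 5: produce
	assert!(!should_produce(&last, 5)); // repeated slot: skip
	assert!(!should_produce(&last, 4)); // older slot: skip
	assert!(should_produce(&last, 6)); // newer slot: produce
}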