This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Clippyfy #6341

Merged
10 commits, merged Nov 30, 2022
Changes from 6 commits
32 changes: 32 additions & 0 deletions .cargo/config.toml
@@ -0,0 +1,32 @@
#
# An auto defined `clippy` feature was introduced,
# but it was found to clash with user defined features,
# so was renamed to `cargo-clippy`.
#
Comment on lines +1 to +5
Contributor

nit: Interesting history, but it's unrelated to our codebase; I think this is unneeded.

# If you want standard clippy run:
# RUSTFLAGS= cargo clippy
[target.'cfg(feature = "cargo-clippy")']
rustflags = [
"-Aclippy::all",
"-Dclippy::correctness",
"-Aclippy::if-same-then-else",
"-Aclippy::clone-double-ref",
"-Dclippy::complexity",
"-Aclippy::zero-prefixed-literal", # 00_1000_000
"-Aclippy::type_complexity", # raison d'etre
"-Aclippy::nonminimal-bool", # maybe
"-Aclippy::borrowed-box", # Reasonable to fix this one
"-Aclippy::too-many-arguments", # (Turning this on would lead to)
Contributor

Suggested change
"-Aclippy::too-many-arguments", # (Turning this on would lead to)
"-Aclippy::too-many-arguments", # (Turning this on would lead to too many errors)

"-Aclippy::unnecessary_cast", # Types may change
"-Aclippy::identity-op", # One case where we do 0 +
"-Aclippy::useless_conversion", # Types may change
Contributor

Looks like these lints were copied from Substrate. But it makes me sad that this one is set to allow. We have a bunch of easy simplifications, e.g.:

  --> xcm/src/v1/junction.rs:95:66
   |
95 |             Junction0::Plurality { id, part } => Ok(Self::Plurality { id: id.into(), part }),
   |                                                                           ^^^^^^^^^ help: consider removing `.into()`: `id`

This lint can be annoying during development as "Types may change", but in CI it should be more strict IMO. That would keep friction to a minimum.

Probably this should be a follow-up issue where people can chime in. Super low priority in the grand scheme of things, so let's leave the lints as-is for now.
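For illustration only, a minimal sketch of the kind of rewrite useless_conversion asks for, using made-up stand-in types rather than the real junction.rs ones:

// Stand-in types for illustration; not the actual xcm junction types.
enum OldJunction { Plurality { id: u32 } }
enum NewJunction { Plurality { id: u32 } }

fn convert(j: OldJunction) -> Result<NewJunction, ()> {
    match j {
        // Before: `Ok(NewJunction::Plurality { id: id.into() })` converts u32 into u32,
        // which clippy flags as useless_conversion. Dropping `.into()` fixes it:
        OldJunction::Plurality { id } => Ok(NewJunction::Plurality { id }),
    }
}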

Author

That also really depends on people's workflow. I only run clippy manually when I'm done. I.e. that lint wouldn't bother me at all.

Contributor

Yeah, I guess you can just have clippy in your pre-push hook instead of pre-commit.

Also, I remembered you can #[allow(clippy::lint)] on certain lines. I would prefer that over whitelisting a lint for the whole codebase just for "One case where we do 0 +".
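A minimal sketch of such a scoped allow, with made-up constants just to show the shape:

// Hypothetical constants, for illustration only.
const DOLLARS: u64 = 10_000_000_000;

// Allow identity_op only on the one item where the `0 +` is intentional,
// rather than -A'ing the lint for the whole workspace.
#[allow(clippy::identity_op)]
const FEE_OFFSET: u64 = 0 + 10 * DOLLARS;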

"-Aclippy::unit_arg", # styalistic.
"-Aclippy::option-map-unit-fn", # styalistic
"-Aclippy::bind_instead_of_map", # styalistic
Comment on lines +23 to +25
Contributor

Suggested change
"-Aclippy::unit_arg", # styalistic.
"-Aclippy::option-map-unit-fn", # styalistic
"-Aclippy::bind_instead_of_map", # styalistic
"-Aclippy::unit_arg", # stylistic.
"-Aclippy::option-map-unit-fn", # stylistic
"-Aclippy::bind_instead_of_map", # stylistic

"-Aclippy::erasing_op", # E.g. 0 * DOLLARS
"-Aclippy::eq_op", # In tests we test equality.
"-Aclippy::while_immutable_condition", # false positives
"-Aclippy::needless_option_as_deref", # false positives
"-Aclippy::derivable_impls", # false positives
"-Aclippy::stable_sort_primitive", # prefer stable sort
]
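As a rough illustration of how the allow/deny split above plays out (assuming cargo clippy picks up this config), a correctness-level lint still fails the build while an explicitly allowed one does not:

// Still denied: unit comparisons fall under -Dclippy::correctness (clippy::unit_cmp),
// so this would fail `cargo clippy` with the config above.
fn denied_example() -> bool {
    let a = ();
    let b = ();
    a == b
}

// Explicitly allowed: -Aclippy::eq_op ("In tests we test equality."),
// so a self-comparison like this is not rejected.
fn allowed_example(x: u32) -> bool {
    x == x
}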
1 change: 0 additions & 1 deletion .gitignore
@@ -10,5 +10,4 @@ polkadot.*
!polkadot.service
!.rpm/*
.DS_Store
.cargo
.env
12 changes: 6 additions & 6 deletions cli/src/command.rs
@@ -591,27 +591,27 @@ pub fn run() -> Result<()> {

#[cfg(feature = "kusama-native")]
if chain_spec.is_kusama() {
return Ok(runner.sync_run(|config| {
return runner.sync_run(|config| {
cmd.run::<service::kusama_runtime::Block, service::KusamaExecutorDispatch>(config)
.map_err(|e| Error::SubstrateCli(e))
})?)
})
}

#[cfg(feature = "westend-native")]
if chain_spec.is_westend() {
return Ok(runner.sync_run(|config| {
return runner.sync_run(|config| {
cmd.run::<service::westend_runtime::Block, service::WestendExecutorDispatch>(config)
.map_err(|e| Error::SubstrateCli(e))
})?)
})
}

// else we assume it is polkadot.
#[cfg(feature = "polkadot-native")]
{
return Ok(runner.sync_run(|config| {
return runner.sync_run(|config| {
cmd.run::<service::polkadot_runtime::Block, service::PolkadotExecutorDispatch>(config)
.map_err(|e| Error::SubstrateCli(e))
})?)
})
}

#[cfg(not(feature = "polkadot-native"))]
2 changes: 1 addition & 1 deletion erasure-coding/src/lib.rs
@@ -216,7 +216,7 @@ pub struct Branches<'a, I> {
impl<'a, I: AsRef<[u8]>> Branches<'a, I> {
/// Get the trie root.
pub fn root(&self) -> H256 {
self.root.clone()
self.root
}
}

8 changes: 4 additions & 4 deletions node/client/src/benchmarking.rs
@@ -165,7 +165,7 @@ impl BenchmarkCallSigner<polkadot_runtime::RuntimeCall, sp_core::sr25519::Pair>
(),
runtime::VERSION.spec_version,
runtime::VERSION.transaction_version,
genesis.clone(),
genesis,
genesis,
(),
(),
@@ -220,7 +220,7 @@ impl BenchmarkCallSigner<westend_runtime::RuntimeCall, sp_core::sr25519::Pair>
(),
runtime::VERSION.spec_version,
runtime::VERSION.transaction_version,
genesis.clone(),
genesis,
genesis,
(),
(),
@@ -274,7 +274,7 @@ impl BenchmarkCallSigner<kusama_runtime::RuntimeCall, sp_core::sr25519::Pair>
(),
runtime::VERSION.spec_version,
runtime::VERSION.transaction_version,
genesis.clone(),
genesis,
genesis,
(),
(),
@@ -328,7 +328,7 @@ impl BenchmarkCallSigner<rococo_runtime::RuntimeCall, sp_core::sr25519::Pair>
(),
runtime::VERSION.spec_version,
runtime::VERSION.transaction_version,
genesis.clone(),
genesis,
genesis,
(),
(),
4 changes: 2 additions & 2 deletions node/core/approval-voting/src/approval_checking.rs
@@ -282,8 +282,8 @@ impl State {

/// Constructs an infinite iterator from an array of `TrancheEntry` values. Any missing tranches
/// are filled with empty assignments, as they are needed to compute the approved tranches.
fn filled_tranche_iterator<'a>(
tranches: &'a [TrancheEntry],
fn filled_tranche_iterator(
tranches: &[TrancheEntry],
) -> impl Iterator<Item = (DelayTranche, &[(ValidatorIndex, Tick)])> {
let mut gap_end = None;

8 changes: 4 additions & 4 deletions node/core/approval-voting/src/criteria.rs
@@ -155,10 +155,10 @@ impl<'a> From<&'a SessionInfo> for Config {
Config {
assignment_keys: s.assignment_keys.clone(),
validator_groups: s.validator_groups.clone(),
n_cores: s.n_cores.clone(),
zeroth_delay_tranche_width: s.zeroth_delay_tranche_width.clone(),
relay_vrf_modulo_samples: s.relay_vrf_modulo_samples.clone(),
n_delay_tranches: s.n_delay_tranches.clone(),
n_cores: s.n_cores,
zeroth_delay_tranche_width: s.zeroth_delay_tranche_width,
relay_vrf_modulo_samples: s.relay_vrf_modulo_samples,
n_delay_tranches: s.n_delay_tranches,
}
}
}
7 changes: 2 additions & 5 deletions node/core/approval-voting/src/import.rs
@@ -415,11 +415,8 @@ pub(crate) async fn handle_new_head<Context, B: Backend>(
Err(error) => {
// It's possible that we've lost a race with finality.
let (tx, rx) = oneshot::channel();
ctx.send_message(ChainApiMessage::FinalizedBlockHash(
block_header.number.clone(),
tx,
))
.await;
ctx.send_message(ChainApiMessage::FinalizedBlockHash(block_header.number, tx))
.await;

let lost_to_finality = match rx.await {
Ok(Ok(Some(h))) if h != block_hash => true,
11 changes: 4 additions & 7 deletions node/core/approval-voting/src/lib.rs
@@ -621,10 +621,7 @@ impl CurrentlyCheckingSet {
.candidate_hash_map
.remove(&approval_state.candidate_hash)
.unwrap_or_default();
approvals_cache.put(
approval_state.candidate_hash.clone(),
approval_state.approval_outcome.clone(),
);
approvals_cache.put(approval_state.candidate_hash, approval_state.approval_outcome);
return (out, approval_state)
}
}
@@ -768,7 +765,7 @@ async fn run<B, Context>(
where
B: Backend,
{
if let Err(err) = db_sanity_check(subsystem.db.clone(), subsystem.db_config.clone()) {
if let Err(err) = db_sanity_check(subsystem.db.clone(), subsystem.db_config) {
gum::warn!(target: LOG_TARGET, ?err, "Could not run approval vote DB sanity check");
}

@@ -1278,7 +1275,7 @@ async fn get_approval_signatures_for_candidate<Context>(
Some(e) => e,
};

let relay_hashes = entry.block_assignments.iter().map(|(relay_hash, _)| relay_hash);
let relay_hashes = entry.block_assignments.keys();

let mut candidate_indices = HashSet::new();
// Retrieve `CoreIndices`/`CandidateIndices` as required by approval-distribution:
@@ -2502,7 +2499,7 @@ async fn issue_approval<Context>(
};

let candidate_hash = match block_entry.candidate(candidate_index as usize) {
Some((_, h)) => h.clone(),
Some((_, h)) => *h,
None => {
gum::warn!(
target: LOG_TARGET,
2 changes: 1 addition & 1 deletion node/core/av-store/src/lib.rs
@@ -61,7 +61,7 @@ const PRUNE_BY_TIME_PREFIX: &[u8; 13] = b"prune_by_time";

// We have some keys we want to map to empty values because existence of the key is enough. We use this because
// rocksdb doesn't support empty values.
const TOMBSTONE_VALUE: &[u8] = &*b" ";
const TOMBSTONE_VALUE: &[u8] = b" ";

/// Unavailable blocks are kept for 1 hour.
const KEEP_UNAVAILABLE_FOR: Duration = Duration::from_secs(60 * 60);
6 changes: 2 additions & 4 deletions node/core/backing/src/lib.rs
@@ -482,9 +482,7 @@ impl TableContextTrait for TableContext {
}

fn is_member_of(&self, authority: &ValidatorIndex, group: &ParaId) -> bool {
self.groups
.get(group)
.map_or(false, |g| g.iter().position(|a| a == authority).is_some())
self.groups.get(group).map_or(false, |g| g.iter().any(|a| a == authority))
}

fn requisite_votes(&self, group: &ParaId) -> usize {
@@ -499,7 +497,7 @@ struct InvalidErasureRoot;
fn primitive_statement_to_table(s: &SignedFullStatement) -> TableSignedStatement {
let statement = match s.payload() {
Statement::Seconded(c) => TableStatement::Seconded(c.clone()),
Statement::Valid(h) => TableStatement::Valid(h.clone()),
Statement::Valid(h) => TableStatement::Valid(*h),
};

TableSignedStatement {
4 changes: 2 additions & 2 deletions node/core/candidate-validation/src/lib.rs
@@ -502,7 +502,7 @@ async fn validate_candidate_exhaustive(
let _timer = metrics.time_validate_candidate_exhaustive();

let validation_code_hash = validation_code.hash();
let para_id = candidate_receipt.descriptor.para_id.clone();
let para_id = candidate_receipt.descriptor.para_id;
gum::debug!(
target: LOG_TARGET,
?validation_code_hash,
@@ -513,7 +513,7 @@
if let Err(e) = perform_basic_checks(
&candidate_receipt.descriptor,
persisted_validation_data.max_pov_size,
&*pov,
&pov,
&validation_code_hash,
) {
gum::info!(target: LOG_TARGET, ?para_id, "Invalid candidate (basic checks)");
1 change: 1 addition & 0 deletions node/core/chain-selection/src/lib.rs
@@ -381,6 +381,7 @@ async fn run<Context, B>(
) where
B: Backend,
{
#![allow(clippy::all)]
loop {
let res = run_until_error(
&mut ctx,
2 changes: 1 addition & 1 deletion node/core/dispute-coordinator/src/import.rs
@@ -169,7 +169,7 @@ impl CandidateVoteState<CandidateVotes> {
}

/// Create a new `CandidateVoteState` from already existing votes.
pub fn new<'a>(votes: CandidateVotes, env: &CandidateEnvironment<'a>, now: Timestamp) -> Self {
pub fn new(votes: CandidateVotes, env: &CandidateEnvironment, now: Timestamp) -> Self {
let own_vote = OwnVoteState::new(&votes, env);

let n_validators = env.validators().len();
60 changes: 32 additions & 28 deletions node/core/dispute-coordinator/src/initialized.rs
@@ -713,20 +713,22 @@ impl Initialized {
return Ok(ImportStatementsResult::InvalidImport)
}

let env =
match CandidateEnvironment::new(&*self.keystore, &self.rolling_session_window, session)
{
None => {
gum::warn!(
target: LOG_TARGET,
session,
"We are lacking a `SessionInfo` for handling import of statements."
);
let env = match CandidateEnvironment::new(
&self.keystore,
&self.rolling_session_window,
session,
) {
None => {
gum::warn!(
target: LOG_TARGET,
session,
"We are lacking a `SessionInfo` for handling import of statements."
);

return Ok(ImportStatementsResult::InvalidImport)
},
Some(env) => env,
};
return Ok(ImportStatementsResult::InvalidImport)
},
Some(env) => env,
};

let candidate_hash = candidate_receipt.hash();

@@ -1075,20 +1077,22 @@ impl Initialized {
"Issuing local statement for candidate!"
);
// Load environment:
let env =
match CandidateEnvironment::new(&*self.keystore, &self.rolling_session_window, session)
{
None => {
gum::warn!(
target: LOG_TARGET,
session,
"Missing info for session which has an active dispute",
);
let env = match CandidateEnvironment::new(
&self.keystore,
&self.rolling_session_window,
session,
) {
None => {
gum::warn!(
target: LOG_TARGET,
session,
"Missing info for session which has an active dispute",
);

return Ok(())
},
Some(env) => env,
};
return Ok(())
},
Some(env) => env,
};

let votes = overlay_db
.load_candidate_votes(session, &candidate_hash)?
@@ -1257,7 +1261,7 @@ fn make_dispute_message(
votes.invalid.iter().next().ok_or(DisputeMessageCreationError::NoOppositeVote)?;
let other_vote = SignedDisputeStatement::new_checked(
DisputeStatement::Invalid(*statement_kind),
our_vote.candidate_hash().clone(),
*our_vote.candidate_hash(),
our_vote.session_index(),
validators
.get(*validator_index)
@@ -1272,7 +1276,7 @@
votes.valid.iter().next().ok_or(DisputeMessageCreationError::NoOppositeVote)?;
let other_vote = SignedDisputeStatement::new_checked(
DisputeStatement::Valid(*statement_kind),
our_vote.candidate_hash().clone(),
*our_vote.candidate_hash(),
our_vote.session_index(),
validators
.get(*validator_index)
2 changes: 1 addition & 1 deletion node/core/dispute-coordinator/src/participation/mod.rs
@@ -235,7 +235,7 @@ impl Participation {
req: ParticipationRequest,
recent_head: Hash,
) -> FatalResult<()> {
if self.running_participations.insert(req.candidate_hash().clone()) {
if self.running_participations.insert(*req.candidate_hash()) {
let sender = ctx.sender().clone();
ctx.spawn(
"participation-worker",
@@ -204,7 +204,7 @@ impl Queues {
// Once https://github.com/rust-lang/rust/issues/62924 is there, we can use a simple:
// target.pop_first().
if let Some((comparator, _)) = target.iter().next() {
let comparator = comparator.clone();
let comparator = *comparator;
target
.remove(&comparator)
.map(|participation_request| (comparator, participation_request))
@@ -99,7 +99,7 @@
);

// Fetch the onchain disputes. We'll do a prioritization based on them.
let onchain = match get_onchain_disputes(sender, leaf.hash.clone()).await {
let onchain = match get_onchain_disputes(sender, leaf.hash).await {
Ok(r) => r,
Err(GetOnchainDisputesError::NotSupported(runtime_api_err, relay_parent)) => {
// Runtime version is checked before calling this method, so the error below should never happen!
4 changes: 2 additions & 2 deletions node/core/provisioner/src/lib.rs
@@ -373,7 +373,7 @@ async fn send_inherent_data(

let disputes = match has_required_runtime(
from_job,
leaf.hash.clone(),
leaf.hash,
PRIORITIZED_SELECTION_RUNTIME_VERSION_REQUIREMENT,
)
.await
@@ -506,7 +506,7 @@ fn select_availability_bitfields(
bitfields.len()
);

selected.into_iter().map(|(_, b)| b).collect()
selected.into_values().collect()
}

/// Determine which cores are free, and then to the degree possible, pick a candidate appropriate to each free core.