Statement Distribution Per Peer Rate Limit #3444

Merged May 1, 2024 · 31 commits

Changes from 18 commits

Commits
1b1ba25
minimal scaffolding
Overkillus Feb 22, 2024
3345aa6
respond task select overhaul
Overkillus Feb 22, 2024
1734c09
test adjustment
Overkillus Feb 23, 2024
cb1d210
fmt
Overkillus Feb 23, 2024
58d828d
Rate limit sending from the requester side
Overkillus Feb 26, 2024
b5e7e06
sender side rate limiter test
Overkillus Mar 20, 2024
456a466
spammer malus variant
Overkillus Mar 21, 2024
975e3bd
Duplicate requests
Overkillus Apr 2, 2024
7056159
Filtering test
Overkillus Apr 2, 2024
3f234e1
fmt
Overkillus Apr 2, 2024
7e817fa
prdoc
Overkillus Apr 2, 2024
913b234
pipeline
Overkillus Apr 2, 2024
e812dcc
increment test
Overkillus Apr 2, 2024
c01be29
Merge branch 'master' into mkz-statement-distribution-rate-limit
Overkillus Apr 2, 2024
af234ef
crate bump
Overkillus Apr 2, 2024
fdaaf4a
clippy nits
Overkillus Apr 2, 2024
e988c60
zombienet test simplification
Overkillus Apr 19, 2024
83cf297
review fixes
Overkillus Apr 19, 2024
37fd618
debug -> trace
Overkillus Apr 30, 2024
adf84a2
cleanup
Overkillus Apr 30, 2024
f0a40d5
metric registered
Overkillus Apr 30, 2024
6099498
reverting paste mistake
Overkillus Apr 30, 2024
07d44ee
registering max parallel requests metric
Overkillus Apr 30, 2024
09e7367
updating metrics in statement distribution respond task
Overkillus Apr 30, 2024
d1f50c5
fmt
Overkillus Apr 30, 2024
e9680d1
Merge branch 'master' into mkz-statement-distribution-rate-limit
Overkillus May 1, 2024
678b141
param typo
Overkillus May 1, 2024
7ff1ddd
Bump test numbering
Overkillus May 1, 2024
f6e8def
remove unused malus param
Overkillus May 1, 2024
8e79dc4
tracing
Overkillus May 1, 2024
a38762b
fmt
Overkillus May 1, 2024
8 changes: 8 additions & 0 deletions .gitlab/pipeline/zombienet/polkadot.yml
@@ -166,6 +166,14 @@ zombienet-polkadot-functional-0012-elastic-scaling-mvp:
--local-dir="${LOCAL_DIR}/functional"
--test="0012-elastic-scaling-mvp.zndsl"

zombienet-polkadot-functional-0013-spam-statement-distribution-requests:
extends:
- .zombienet-polkadot-common
script:
- /home/nonroot/zombie-net/scripts/ci/run-test-local-env-manager.sh
--local-dir="${LOCAL_DIR}/functional"
--test="0013-spam-statement-distribution-requests.zndsl"

zombienet-polkadot-smoke-0001-parachains-smoke-test:
extends:
- .zombienet-polkadot-common
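For local debugging, the 0013 test referenced by this job can typically be run directly with the zombienet CLI, e.g. `zombienet test polkadot/zombienet_tests/functional/0013-spam-statement-distribution-requests.zndsl` — the local path and exact invocation are assumptions based on the repository layout, not something shown in this diff.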
1 change: 1 addition & 0 deletions Cargo.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions polkadot/node/malus/Cargo.toml
@@ -37,6 +37,7 @@ polkadot-node-core-dispute-coordinator = { path = "../core/dispute-coordinator"
polkadot-node-core-candidate-validation = { path = "../core/candidate-validation" }
polkadot-node-core-backing = { path = "../core/backing" }
polkadot-node-primitives = { path = "../primitives" }
polkadot-node-network-protocol = { path = "../network/protocol" }
polkadot-primitives = { path = "../../primitives" }
color-eyre = { version = "0.6.1", default-features = false }
assert_matches = "1.5"
7 changes: 7 additions & 0 deletions polkadot/node/malus/src/malus.rs
@@ -40,6 +40,8 @@ enum NemesisVariant {
DisputeAncestor(DisputeAncestorOptions),
/// Delayed disputing of finalized candidates.
DisputeFinalizedCandidates(DisputeFinalizedCandidatesOptions),
/// Spam many statement requests instead of sending a single one.
SpamStatementRequests(SpamStatementRequestsOptions),
}

#[derive(Debug, Parser)]
@@ -98,6 +100,11 @@ impl MalusCli {
finality_delay,
)?
},
NemesisVariant::SpamStatementRequests(opts) => {
let SpamStatementRequestsOptions { spam_factor, cli } = opts;

polkadot_cli::run_node(cli, SpamStatementRequests { spam_factor }, finality_delay)?
},
}
Ok(())
}
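Since the nemesis variants are exposed as clap subcommands, the new variant would typically be launched as something like `malus spam-statement-requests --spam-factor=1000` followed by the usual polkadot node flags; the binary name and kebab-case subcommand are assumptions based on the derive attributes shown here and in the variant's options struct, not spelled out in this hunk.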
2 changes: 2 additions & 0 deletions polkadot/node/malus/src/variants/mod.rs
@@ -20,13 +20,15 @@ mod back_garbage_candidate;
mod common;
mod dispute_finalized_candidates;
mod dispute_valid_candidates;
mod spam_statement_requests;
mod suggest_garbage_candidate;
mod support_disabled;

pub(crate) use self::{
back_garbage_candidate::{BackGarbageCandidateOptions, BackGarbageCandidates},
dispute_finalized_candidates::{DisputeFinalizedCandidates, DisputeFinalizedCandidatesOptions},
dispute_valid_candidates::{DisputeAncestorOptions, DisputeValidCandidates},
spam_statement_requests::{SpamStatementRequests, SpamStatementRequestsOptions},
suggest_garbage_candidate::{SuggestGarbageCandidateOptions, SuggestGarbageCandidates},
support_disabled::{SupportDisabled, SupportDisabledOptions},
};
161 changes: 161 additions & 0 deletions polkadot/node/malus/src/variants/spam_statement_requests.rs
@@ -0,0 +1,161 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.

// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

//! A malicious node variant that attempts to spam statement requests.
//!
//! This malus variant behaves honestly in everything except when propagating statement distribution
//! requests through the network bridge subsystem. Instead of sending a single request when it needs
//! something, it attempts to spam the peer with multiple requests.
//!
//! Attention: For usage with `zombienet` only!

#![allow(missing_docs)]

use polkadot_cli::{
service::{
AuxStore, Error, ExtendedOverseerGenArgs,
Overseer, OverseerConnector, OverseerGen, OverseerGenArgs, OverseerHandle,
},
validator_overseer_builder, Cli,
};
use polkadot_node_network_protocol::request_response::{outgoing::Requests, OutgoingRequest};
use polkadot_node_subsystem::{messages::NetworkBridgeTxMessage, SpawnGlue};
use polkadot_node_subsystem_types::{ChainApiBackend, RuntimeApiSubsystemClient};
use sp_core::traits::SpawnNamed;

// Filter wrapping related types.
use crate::{interceptor::*, shared::MALUS};

use std::sync::Arc;

/// Wraps around network bridge and replaces it.
#[derive(Clone)]
struct RequestSpammer<Spawner> {
spawner: Spawner, //stores the actual network bridge subsystem spawner
spam_factor: u32, // How many statement distribution requests to send.
}

impl<Sender, Spawner> MessageInterceptor<Sender> for RequestSpammer<Spawner>
where
Sender: overseer::NetworkBridgeTxSenderTrait + Clone + Send + 'static,
Spawner: overseer::gen::Spawner + Clone + 'static,
{
type Message = NetworkBridgeTxMessage;

/// Intercept NetworkBridgeTxMessage::SendRequests with Requests::AttestedCandidateV2 inside and
/// duplicate that request
fn intercept_incoming(
&self,
_subsystem_sender: &mut Sender,
msg: FromOrchestra<Self::Message>,
) -> Option<FromOrchestra<Self::Message>> {
match msg {
FromOrchestra::Communication {
msg: NetworkBridgeTxMessage::SendRequests(requests, if_disconnected),
} => {
let mut new_requests = Vec::new();

for request in requests {
match request {
Requests::AttestedCandidateV2(ref req) => {
// Temporarily store peer and payload for duplication
let peer_to_duplicate = req.peer.clone();
let payload_to_duplicate = req.payload.clone();
// Push the original request
new_requests.push(request);

// Duplicate for spam purposes
gum::info!(
target: MALUS,
"😈 Duplicating AttestedCandidateV2 request extra {:?} times to peer: {:?}.", self.spam_factor, peer_to_duplicate,
);
new_requests.extend((0..self.spam_factor - 1).map(|_| {
let (new_outgoing_request, _) = OutgoingRequest::new(
peer_to_duplicate.clone(),
payload_to_duplicate.clone(),
);
Requests::AttestedCandidateV2(new_outgoing_request)
}));

}
_ => {
new_requests.push(request);
[Review thread on this line]

Contributor: We can dedup this line by moving it above the match.

Overkillus (author): This one is somewhat of a conscious trade-off, but if you see a graceful way of doing it feel free to suggest it. The problem is that Request does not implement Copy/Clone. I could possibly track the index where I push it, but that would make it even less readable imo.
}
}
}

// Passthrough the message with a potentially modified number of requests
Some(FromOrchestra::Communication {
msg: NetworkBridgeTxMessage::SendRequests(new_requests, if_disconnected),
})
},
FromOrchestra::Communication { msg } => Some(FromOrchestra::Communication { msg }),
FromOrchestra::Signal(signal) => Some(FromOrchestra::Signal(signal)),
}
}
}

//----------------------------------------------------------------------------------

#[derive(Debug, clap::Parser)]
#[clap(rename_all = "kebab-case")]
#[allow(missing_docs)]
pub struct SpamStatementRequestsOptions {
/// How many statement distribution requests to send.
#[clap(long, ignore_case = true, default_value_t = 1000, value_parser = clap::value_parser!(u32).range(0..=10000000))]
pub spam_factor: u32,

#[clap(flatten)]
pub cli: Cli,
}

/// SpamStatementRequests implementation wrapper which implements `OverseerGen` glue.
pub(crate) struct SpamStatementRequests {
/// How many statement distribution requests to send.
pub spam_factor: u32,
}

impl OverseerGen for SpamStatementRequests {
fn generate<Spawner, RuntimeClient>(
&self,
connector: OverseerConnector,
args: OverseerGenArgs<'_, Spawner, RuntimeClient>,
ext_args: Option<ExtendedOverseerGenArgs>,
) -> Result<(Overseer<SpawnGlue<Spawner>, Arc<RuntimeClient>>, OverseerHandle), Error>
where
RuntimeClient: RuntimeApiSubsystemClient + ChainApiBackend + AuxStore + 'static,
Spawner: 'static + SpawnNamed + Clone + Unpin,
{
gum::info!(
target: MALUS,
"😈 Started Malus node that duplicates each statement distribution request spam_factor = {:?} times.",
&self.spam_factor,
);

let request_spammer = RequestSpammer {
spawner: SpawnGlue(args.spawner.clone()),
spam_factor: self.spam_factor,
};

validator_overseer_builder(
args,
ext_args.expect("Extended arguments required to build validator overseer are provided"),
)?
.replace_network_bridge_tx(move |cb| InterceptedSubsystem::new(cb, request_spammer))
.build_with_connector(connector)
.map_err(|e| e.into())
}
}
71 changes: 48 additions & 23 deletions polkadot/node/network/statement-distribution/src/v2/mod.rs
@@ -58,6 +58,8 @@ use sp_keystore::KeystorePtr;
use fatality::Nested;
use futures::{
channel::{mpsc, oneshot},
future::FutureExt,
select,
stream::FuturesUnordered,
SinkExt, StreamExt,
};
@@ -3350,33 +3352,56 @@ pub(crate) async fn respond_task(
 	mut sender: mpsc::Sender<ResponderMessage>,
 ) {
 	let mut pending_out = FuturesUnordered::new();
+	let mut active_peers = HashSet::new();
+
 	loop {
-		// Ensure we are not handling too many requests in parallel.
-		if pending_out.len() >= MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS as usize {
-			// Wait for one to finish:
-			pending_out.next().await;
-		}
-
-		let req = match receiver.recv(|| vec![COST_INVALID_REQUEST]).await.into_nested() {
-			Ok(Ok(v)) => v,
-			Err(fatal) => {
-				gum::debug!(target: LOG_TARGET, error = ?fatal, "Shutting down request responder");
-				return
-			},
-			Ok(Err(jfyi)) => {
-				gum::debug!(target: LOG_TARGET, error = ?jfyi, "Decoding request failed");
-				continue
-			},
-		};
-
-		let (pending_sent_tx, pending_sent_rx) = oneshot::channel();
-		if let Err(err) = sender
-			.feed(ResponderMessage { request: req, sent_feedback: pending_sent_tx })
-			.await
-		{
-			gum::debug!(target: LOG_TARGET, ?err, "Shutting down responder");
-			return
-		}
-		pending_out.push(pending_sent_rx);
+		select! {
+			// New request
+			request_result = receiver.recv(|| vec![COST_INVALID_REQUEST]).fuse() => {
+				let request = match request_result.into_nested() {
+					Ok(Ok(v)) => v,
+					Err(fatal) => {
+						gum::debug!(target: LOG_TARGET, error = ?fatal, "Shutting down request responder");
+						return
+					},
+					Ok(Err(jfyi)) => {
+						gum::debug!(target: LOG_TARGET, error = ?jfyi, "Decoding request failed");
+						continue
+					},
+				};
+
+				// If peer currently being served drop request
+				if active_peers.contains(&request.peer) {
+					gum::debug!(target: LOG_TARGET, "Peer already being served, dropping request");
+					continue
+				}
+
+				// If we are over parallel limit wait for one to finish
+				if pending_out.len() >= MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS as usize {
+					gum::debug!(target: LOG_TARGET, "Over max parallel requests, waiting for one to finish");
+					let (_, peer) = pending_out.select_next_some().await;
+					active_peers.remove(&peer);
+				}
+
+				// Start serving the request
+				let (pending_sent_tx, pending_sent_rx) = oneshot::channel();
+				let peer = request.peer;
+				if let Err(err) = sender
+					.feed(ResponderMessage { request, sent_feedback: pending_sent_tx })
+					.await
+				{
+					gum::debug!(target: LOG_TARGET, ?err, "Shutting down responder");
+					return
+				}
+				let future_with_peer = pending_sent_rx.map(move |result| (result, peer));
+				pending_out.push(future_with_peer);
+				active_peers.insert(peer);
+			},
+			// Request served/finished
+			result = pending_out.select_next_some() => {
+				let (_, peer) = result;
+				active_peers.remove(&peer);
+			},
+		};
 	}
 }

[Review thread on the `if active_peers.contains(&request.peer)` check]

Contributor: Shouldn't this be handled in the context of a candidate/relay parent? Otherwise we would drop legit requests.

Overkillus (author), Feb 26, 2024: This is definitely a fair point. Rob mentioned it in https://github.com/paritytech-secops/srlabs_findings/issues/303#issuecomment-1588185410 as well.

It's a double-edged sword. If we make it 1 per context then the request/response protocol is more efficient, but with async backing enabled (which to my understanding broadens the scope of possible contexts) the mpsc channel (size 25-ish) can still be easily overwhelmed. The problem of dropped requests gets somewhat better after the recent commit "Rate limit sending from the requester side" (https://github.com/paritytech/polkadot-sdk/pull/3444/commits/58d828db45b7c023e8a8266d3180be4361723519), which also introduces the rate limit on the requester side, so effort is not wasted.

If we are already requesting from a peer and want to request something else as well, we simply wait for the first one to finish.

What could be argued is making the limit a small constant (1-3) instead of 1 per PeerId. I would like to do some testing and more reviews; changing it to a configurable constant instead of a hard limit of 1 is actually a trivial change. (I don't think it's necessary. Better to have a single completely sent candidate than two half-sent candidates.)

Truth be told, this whole solution is far from ideal. It's a fix, but the final solution is to manage DoS at a significantly lower level than in parachain consensus. It will do for now, but DoS needs to be looked at in more detail in the near future.

Contributor:

> If we are already requesting from a peer and want to request something else as well, we simply wait for the first one to finish.

I would rather put the constraint on the responder side than on the requester side. The reasoning is to avoid any bugs in other clients' implementations due to the subtle requirement to wait for the first request to finish.

> What could be argued is making the limit a small constant (1-3) instead of 1 per PeerId.

I totally agree with DoS protection as low as possible in the stack, but this seems to be sensible. Maybe instead of relying on a hardcoded constant we can derive the value from the async backing parameters.

Overkillus (author):

> I would rather put the constraint on the responder side than on the requester side.

I agree that the constraint on the responder side is more important, but having it in both places seems even better.

> The reasoning is to avoid any bugs in other clients' implementations due to the subtle requirement to wait for the first request to finish.

Even without this PR the risk seems exactly the same. We were already checking the number of parallel requests being handled on the requester side, so even if there were bugs in other clients they could surface there with or without this PR.

The only really bad scenario would be if for some reason we kept a PeerId marked as still being active when it really isn't, effectively blacklisting that peer. If that happened, it would mean the future connected to that PeerId is still in pending_responses, which would eventually brick the requester anyway, since we cannot go over the MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS limit. Both situations are just as fatal, but the new system at least has extra protections against DoS attempts, so it's a security gain.

By adding the constraint on the requester side we limit the wasted effort and can potentially safely add reputation updates if we get those unsolicited requests, which protects us from DoS even further. (Some honest requests might slip through, so the rep update needs to be small.)

> Maybe instead of relying on a hardcoded constant we can derive the value from the async backing parameters.

We potentially could, but I don't see why this value would need to change a lot. Even if we change some async backing params we should be fine. It seems more sensitive to the channel size, since we want to make it hard for a malicious node to dominate that queue, and to MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS.

burdges, Mar 21, 2024: You need not choose a uniform distribution over recent relay parents either. Instead use the full value on the first couple, then 1/2 and 1/4 for a few, and then 1 for a while. You need to figure out whether the selected distribution breaks collators, but it might work.
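The last comment above suggests a per-peer budget that decays with relay-parent age rather than a uniform limit. Purely as an illustration of that shape — this is not part of the PR, and the function name, signature, and base limit are hypothetical — such a budget could look like:

// Hypothetical sketch (not in this PR): per-peer request budget that decays
// with the age of the relay parent the request refers to (0 = most recent).
fn per_peer_request_budget(relay_parent_age: usize, base_limit: u32) -> u32 {
    match relay_parent_age {
        0 | 1 => base_limit,              // full budget for the newest relay parents
        2 | 3 => (base_limit / 2).max(1), // half budget
        4 | 5 => (base_limit / 4).max(1), // quarter budget
        _ => 1,                           // a single request for anything older
    }
}

The merged code keeps the simpler rule of at most one in-flight response per peer (plus the global MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS bound); the sketch only illustrates the alternative distribution discussed in the thread.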