
Statement Distribution Per Peer Rate Limit #3444

Merged
merged 31 commits into master from mkz-statement-distribution-rate-limit
May 1, 2024

Conversation

Overkillus
Contributor

@Overkillus Overkillus commented Feb 22, 2024

  • Drop requests from a PeerID that is already being served by us (see the sketch below).
  • Don't send requests to a PeerID if we are already requesting something from them at that moment (prioritise other requests or wait).
  • Tests
  • Add a small rep update for unsolicited requests (same peer request); not included in the original PR due to potential issues with nodes updating slowly.
  • Add a metric to track the number of requests dropped due to peer rate limiting.
  • Add a metric to track how many times a node reaches the max parallel requests limit in v2+.

Helps with but does not close yet: https://github.com/paritytech-secops/srlabs_findings/issues/303
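
For orientation, here is a minimal, self-contained sketch of the responder-side part of this: serve at most one request per peer at a time and drop concurrent ones. The names (`PeerId`, `ActivePeers`, `try_start_serving`) are illustrative stand-ins, not the PR's actual code.

```rust
// Hypothetical sketch of responder-side per-peer limiting: serve at most one
// request per peer at a time, dropping concurrent ones. Names are illustrative.
use std::collections::HashSet;

type PeerId = u64; // stand-in for the real network PeerId type

#[derive(Default)]
struct ActivePeers(HashSet<PeerId>);

impl ActivePeers {
    /// Returns true if we may start serving this peer, false if a request
    /// from the same peer is already being served (and should be dropped).
    fn try_start_serving(&mut self, peer: PeerId) -> bool {
        self.0.insert(peer)
    }

    /// Call once the response has been sent (or the request failed).
    fn finished_serving(&mut self, peer: &PeerId) {
        self.0.remove(peer);
    }
}

fn main() {
    let mut active = ActivePeers::default();
    assert!(active.try_start_serving(7));  // first request from peer 7 is served
    assert!(!active.try_start_serving(7)); // a concurrent second request is dropped
    active.finished_serving(&7);
    assert!(active.try_start_serving(7));  // allowed again once the first finished
}
```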

return
// If peer currently being served drop request
if active_peers.contains(&request.peer) {
continue
Contributor

Shouldn't this be handled in the context of a candidate/relay parent? Otherwise we would drop legit requests.

Contributor Author

@Overkillus Overkillus Feb 26, 2024

This is definitely a fair point. Rob mentioned it https://github.com/paritytech-secops/srlabs_findings/issues/303#issuecomment-1588185410 as well.

It's a double-edged sword. If we make it 1 per context then the request/response protocol is more efficient, but with async backing enabled (which to my understanding broadens the scope of possible contexts) the mpsc channel (size 25-ish) can still be easily overwhelmed. The problem of dropped requests gets somewhat better after the recent commit [Rate limit sending from the requester side](https://github.com/paritytech/polkadot-sdk/pull/3444/commits/58d828db45b7c023e8a8266d3180be4361723519), which also introduces the rate limit on the requester side so effort is not wasted.

If we are already requesting from a peer and want to request something else as well we simply wait for the first one to finish.

What could be argued is whether the limit should be a small constant (1-3) instead of 1 per PeerID. I would like to do some testing and get more reviews; changing it to a configurable constant instead of a hard limit of 1 is actually a trivial change. (I don't think it's necessary. Better to have a single completely sent candidate than two half-sent candidates.)

Truth be told, this whole solution is far from ideal. It's a fix, but the final solution is to manage DoS at a significantly lower level than parachain consensus. It will do for now, but DoS needs to be looked at in more detail in the near future.
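
As a rough illustration of the requester-side rate limit mentioned above (requests to a peer that already has one in flight are held back rather than sent), here is a small sketch; `Request`, `rate_limit`, and the surrounding types are hypothetical and not the code added in the linked commit.

```rust
// Sketch of requester-side rate limiting: requests to peers that already have a
// request in flight are deferred rather than sent. All names are illustrative.
use std::collections::{HashSet, VecDeque};

type PeerId = u64;

struct Request {
    peer: PeerId, // the real code also carries the request payload
}

/// Split pending requests into "send now" and "retry later" based on in-flight peers.
fn rate_limit(
    pending: VecDeque<Request>,
    in_flight: &HashSet<PeerId>,
) -> (Vec<Request>, VecDeque<Request>) {
    let mut send_now = Vec::new();
    let mut deferred = VecDeque::new();
    for req in pending {
        if in_flight.contains(&req.peer) {
            deferred.push_back(req); // wait for the current request to this peer to finish
        } else {
            send_now.push(req);
        }
    }
    (send_now, deferred)
}

fn main() {
    let in_flight: HashSet<PeerId> = [1].into_iter().collect();
    let pending = VecDeque::from(vec![Request { peer: 1 }, Request { peer: 2 }]);
    let (now, later) = rate_limit(pending, &in_flight);
    assert_eq!(now.len(), 1);   // the request to peer 2 can go out immediately
    assert_eq!(later.len(), 1); // peer 1 already has one in flight, so defer
}
```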

Contributor

> This is definitely a fair point. Rob mentioned it paritytech-secops/srlabs_findings#303 (comment) as well.
>
> It's a double-edged sword. If we make it 1 per context then the request/response protocol is more efficient, but with async backing enabled (which to my understanding broadens the scope of possible contexts) the mpsc channel (size 25-ish) can still be easily overwhelmed. The problem of dropped requests gets somewhat better after the recent commit [Rate limit sending from the requester side](https://github.com/paritytech/polkadot-sdk/pull/3444/commits/58d828db45b7c023e8a8266d3180be4361723519), which also introduces the rate limit on the requester side so effort is not wasted.
>
> If we are already requesting from a peer and want to request something else as well we simply wait for the first one to finish.

I would rather put the constraint on the responder side than on the requester side. The reasoning is to avoid any bugs in other clients' implementations due to the subtle requirement to wait for the first request to finish.

> What could be argued is whether the limit should be a small constant (1-3) instead of 1 per PeerID. I would like to do some testing and get more reviews; changing it to a configurable constant instead of a hard limit of 1 is actually a trivial change. (I don't think it's necessary. Better to have a single completely sent candidate than two half-sent candidates.)

I totally agree with putting DoS protection as low as possible in the stack, but this approach seems sensible for now. Maybe instead of relying on a hardcoded constant we can derive the value from the async backing parameters.

Contributor Author

> I would rather put the constraint on the responder side than on the requester side. The reasoning is to avoid any bugs in other clients' implementations due to the subtle requirement to wait for the first request to finish.

I agree that the constraint on the responder side is more important, but having it in both places seems even better.

> The reasoning is to avoid any bugs in other clients' implementations due to the subtle requirement to wait for the first request to finish.

Even without this PR the risk seems exactly the same. We were already checking the number of parallel requests being handled on the requester side, so even if there were bugs in other clients they could surface there with or without this PR.

The only really bad scenario would be if for some reason we kept a PeerID marked as still active when it really isn't, effectively blacklisting that peer. If that happened, it would mean the future connected to that PeerID is still in pending_responses, which would eventually brick the requester anyway since we cannot go over the MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS limit. Both situations are just as fatal, but the new system at least has extra protections against DoS attempts, so it's a security gain.

By adding the constraint on the requester side we limit the wasted effort, and we can potentially safely add reputation updates when we receive unsolicited requests, which protects us from DoS even further. (Some honest requests might slip through, so the rep update needs to be small.)

> Maybe instead of relying on a hardcoded constant we can derive the value from the async backing parameters.

We potentially could, but I don't see why this value would need to change much. Even if we change some async backing params we should be fine. It seems more sensitive to the channel size (we want to make it hard for a malicious node to dominate that queue) and to MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS.

@burdges burdges Mar 21, 2024

You need not choose a uniform distribution over recent relay parents either. Instead, allow the full value on the first couple, then 1/2 and 1/4 for a few, and then 1 for a while. You need to figure out whether the selected distribution breaks collators, but it might work.
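
A tiny sketch of the decaying per-relay-parent budget suggested here; the specific age buckets and values are purely illustrative, not a concrete proposal from the thread.

```rust
// Hypothetical decaying budget per relay parent: the newest relay parents get
// the full allowance, slightly older ones get 1/2 and 1/4, and the rest get 1.
fn per_relay_parent_limit(age: usize, full: u32) -> u32 {
    match age {
        0 | 1 => full,              // full value on the first couple
        2 | 3 => (full / 2).max(1), // then 1/2 for a few
        4 | 5 => (full / 4).max(1), // then 1/4
        _ => 1,                     // then 1 for a while
    }
}

fn main() {
    assert_eq!(per_relay_parent_limit(0, 8), 8);
    assert_eq!(per_relay_parent_limit(2, 8), 4);
    assert_eq!(per_relay_parent_limit(4, 8), 2);
    assert_eq!(per_relay_parent_limit(9, 8), 1);
}
```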

Contributor

@alexggh alexggh left a comment

Added some comments. What is not clear to me is why can_request from answer_request is not considered fast enough; that already has the benefit that we punish peers that are misbehaving.

Just my 2 cents: if what we want is to prevent peers from exhausting the budget for the number of connections, I would think the proper fix needs to go deeper into the network layer rather than here, since a lot of time is spent before the message reaches this point.

@Overkillus
Contributor Author

re: @alexggh

> Added some comments. What is not clear to me is why can_request from answer_request is not considered fast enough; that already has the benefit that we punish peers that are misbehaving.

Technically a fair point, but I think there's a reason. It's not about speed; respond_task is technically a bit quicker than answer_request, but that gain is minimal. It would also logically make sense for this logic to live in something called can_request.

Even before this change we tracked the parallel requests up to the limit (MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS) in respond_task, and my guess why is simply that respond_task encapsulates the async component of the request filtering procedure while can_request focuses on the simpler sync filtering. To properly filter out the most current PeerID, this logic needed to be async as well, hence it also lives in the async respond_task.

can_request primarily filters based on the knowledge we have about what was sent/received previously, while the async respond_task is very time sensitive since we are operating on requests that are currently in flight.

> Just my 2 cents: if what we want is to prevent peers from exhausting the budget for the number of connections, I would think the proper fix needs to go deeper into the network layer rather than here, since a lot of time is spent before the message reaches this point.

I 100% agree with you here. I said this myself, and Robert and I have already spoken about it; that approach will be pursued. That's why this solution is not ideal, but it at least makes the attack vector much harder to pull off with minimal changes to the current codebase. Not ideal, but worth the small effort before we tackle the wider, more general issue at the lower level.
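
To make the division described above a bit more concrete, here is a much-simplified, hypothetical sketch of the two stages: a synchronous can_request-style check against already-known state, and the respond_task-style accounting for requests that are actually in flight (per-peer set plus the parallel limit). This is not the real statement-distribution code; every name and value below is illustrative.

```rust
// Hypothetical two-stage filtering, mirroring the split described above.
use std::collections::HashSet;

type PeerId = u64;
const MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS: usize = 5; // illustrative value

struct IncomingRequest {
    peer: PeerId,
}

/// Sync stage ("can_request"-style): cheap checks against what we already know
/// about past traffic, e.g. peers we have decided not to serve at all.
fn can_request(req: &IncomingRequest, refused: &HashSet<PeerId>) -> bool {
    !refused.contains(&req.peer)
}

/// Async-side stage ("respond_task"-style): accounting for requests that are
/// currently in flight, i.e. the per-peer set and the parallelism limit.
fn try_admit(req: &IncomingRequest, active_peers: &mut HashSet<PeerId>) -> bool {
    if active_peers.len() >= MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS {
        return false; // at the parallel limit, back off for now
    }
    active_peers.insert(req.peer) // false => this peer is already being served
}

fn main() {
    let refused = HashSet::new();
    let mut active = HashSet::new();
    let req = IncomingRequest { peer: 3 };
    assert!(can_request(&req, &refused) && try_admit(&req, &mut active));
    assert!(!try_admit(&req, &mut active)); // same peer again while still active
}
```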

@Overkillus Overkillus self-assigned this Apr 2, 2024
@Overkillus Overkillus marked this pull request as ready for review April 2, 2024 08:38
@Overkillus Overkillus added T0-node This PR/Issue is related to the topic “node”. I1-security The node fails to follow expected, security-sensitive, behaviour. T8-polkadot This PR/Issue is related to/affects the Polkadot network. labels Apr 2, 2024
@Overkillus Overkillus requested a review from a team as a code owner April 2, 2024 09:44
msg: NetworkBridgeTxMessage::SendRequests(mut requests, if_disconnected),
} => {
// AttestedCandidateV2 requests arrive 1 by 1
if requests.len() == 1 {
Contributor

This might be true for now, but can be easily broken in the future. Why not loop through all incoming requests?

Contributor Author

Handled multiple requests 👍

Comment on lines 79 to 86
for _ in 0..self.spam_factor - 1 {
let (new_outgoing_request, _) = OutgoingRequest::new(
peer_to_duplicate.clone(),
payload_to_duplicate.clone(),
);
let new_request = Requests::AttestedCandidateV2(new_outgoing_request);
requests.push(new_request);
}
Contributor

It should work without needing peer_to_duplicate and payload_to_duplicate clones:

Suggested change
for _ in 0..self.spam_factor - 1 {
let (new_outgoing_request, _) = OutgoingRequest::new(
peer_to_duplicate.clone(),
payload_to_duplicate.clone(),
);
let new_request = Requests::AttestedCandidateV2(new_outgoing_request);
requests.push(new_request);
}
requests.extend((0..self.spam_factor - 1).map(|_| {
    let (new_outgoing_request, _) = OutgoingRequest::new(
        peer_to_duplicate.clone(),
        payload_to_duplicate.clone(),
    );
    Requests::AttestedCandidateV2(new_outgoing_request)
}));

Contributor Author

Borrowed the extend syntax but still need the clones to satisfy the borrow checker.

Contributor

@sandreim sandreim left a comment

Nice work!


}
_ => {
new_requests.push(request);
Contributor

We can dedup this line by moving it above the match.

Contributor Author

This one is a conscious trade-off, but if you see a graceful way of doing it feel free to suggest it.

The problem is that Request does not implement Copy/Clone. I could possibly track the index where I push it, but that would make it even less readable imo.
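
For readers skimming the thread, a small illustrative example of the constraint being discussed: because the request type is neither Copy nor Clone, the push moves the value, so it cannot simply be hoisted above the match when one arm needs to inspect and possibly drop the request first. `Req` and `route` below are stand-ins, not the PR's code.

```rust
// Illustrative only: a non-Clone request type makes it awkward to hoist the push
// above the match, because the rate-limiting arm must inspect and possibly drop
// the request before it is forwarded.
enum Req {
    AttestedCandidateV2(u64), // payload reduced to a peer id for the example
    Other(u64),
}

fn route(incoming: Vec<Req>, in_flight_peer: u64) -> Vec<Req> {
    let mut new_requests = Vec::new();
    for request in incoming {
        match request {
            // v2 requests to a peer we are already requesting from are dropped here;
            // this arm has to see the request before anyone pushes it anywhere.
            Req::AttestedCandidateV2(peer) if peer == in_flight_peer => {}
            // The remaining arms forward the request. Since `Req` is not Clone,
            // the push is repeated per arm instead of being hoisted above the match.
            Req::AttestedCandidateV2(_) => new_requests.push(request),
            _ => new_requests.push(request),
        }
    }
    new_requests
}

fn main() {
    let reqs = vec![
        Req::AttestedCandidateV2(1),
        Req::AttestedCandidateV2(2),
        Req::Other(3),
    ];
    assert_eq!(route(reqs, 1).len(), 2); // the request to the in-flight peer was dropped
}
```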

Contributor

@alexggh alexggh left a comment

Good job!

@Overkillus Overkillus enabled auto-merge May 1, 2024 16:56
@Overkillus Overkillus added this pull request to the merge queue May 1, 2024
Merged via the queue into master with commit 6d392c7 May 1, 2024
140 checks passed
@Overkillus Overkillus deleted the mkz-statement-distribution-rate-limit branch May 1, 2024 17:41
dcolley added a commit to metaspan/polkadot-sdk that referenced this pull request May 6, 2024
* 'master' of https://github.com/metaspan/polkadot-sdk: (65 commits)
  Introduces `TypeWithDefault<T, D: Get<T>>` (paritytech#4034)
  Publish `polkadot-sdk-frame`  crate (paritytech#4370)
  Add validate field to prdoc (paritytech#4368)
  State trie migration on asset-hub westend and collectives westend (paritytech#4185)
  Fix: dust unbonded for zero existential deposit (paritytech#4364)
  Bridge: added subcommand to relay single parachain header (paritytech#4365)
  Bridge: fix zombienet tests (paritytech#4367)
  [WIP][CI] Add more GHA jobs (paritytech#4270)
  Allow for 0 existential deposit in benchmarks for `pallet_staking`, `pallet_session`, and `pallet_balances` (paritytech#4346)
  Deprecate `NativeElseWasmExecutor` (paritytech#4329)
  More `xcm::v4` cleanup and `xcm_fee_payment_runtime_api::XcmPaymentApi` nits (paritytech#4355)
  sc-tracing: enable env-filter feature (paritytech#4357)
  deps: update jsonrpsee to v0.22.5 (paritytech#4330)
  Add PoV-reclaim enablement guide to polkadot-sdk-docs (paritytech#4244)
  cargo: Update experimental litep2p to latest version (paritytech#4344)
  Bridge: ignore client errors when calling recently added `*_free_headers_interval` methods (paritytech#4350)
  Make parachain template great again (and async backing ready) (paritytech#4295)
  [Backport] Version bumps and reorg prdocs from 1.11.0 (paritytech#4336)
  HRMP - set `DefaultChannelSizeAndCapacityWithSystem` with dynamic values according to the `ActiveConfig` (paritytech#4332)
  Statement Distribution Per Peer Rate Limit (paritytech#3444)
  ...
TarekkMA pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request Aug 2, 2024