
protocols/kad: Do not attempt to store expired record in record store #1496

Merged
merged 4 commits into from
Mar 18, 2020

Conversation

mxinden
Member

@mxinden mxinden commented Mar 12, 2020

`Kademlia::record_received` calculates the expiration time of a record
before inserting it into the record store. Instead of inserting the
record into the record store unconditionally, with this patch the
record is only inserted if it has not expired. If the record has
expired, a `KademliaHandlerIn::Reset` for the given (sub)stream is
triggered.

This would serve as a tiny defense mechanism against an attacker trying
to fill a node's record store with expired records before the record
store's clean up procedure removes the records.


This is a follow-up to #1492 (comment). Instead of opening an issue I created a pull request right away, given that the change set should be reasonably small.

Major discussion point: Should we trigger a KademliaHandlerIn::PutRecordRes or a KademliaHandlerIn::Reset when the record is expired?

As far as I can tell, the former would inform the remote that the value was indeed put into the record store, whereas the latter would simply have the remote remove the substream. The former would lead to an `on_success` call on the query; the latter would have the query retry another node.

Consulting the specification did not reveal any suggestions. Reading the Go implementation, I failed to determine where the handler for a put event checks whether a given record is expired, given that it only seems to instantiate a `PublicKeyValidator`.

I decided to reset the connection and thereby not (wrongly) acknowledge that the node cached the record. What do you think?
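The decision point under discussion can be sketched as follows. This is a hypothetical, minimal model (the types `Record`, `Reply`, and the `Vec`-backed store are stand-ins, not the real rust-libp2p API), showing the expiry check before the store insert and the two possible replies, with the PR's initial choice of resetting the substream:

```rust
use std::time::Instant;

// Hypothetical minimal record type; the real `Record` carries key,
// value, publisher, and expiry.
struct Record {
    expires: Option<Instant>,
}

impl Record {
    fn is_expired(&self, now: Instant) -> bool {
        self.expires.map_or(false, |t| now >= t)
    }
}

#[derive(Debug, PartialEq)]
enum Reply {
    PutRecordRes, // acknowledge the store request
    Reset,        // reset the (sub)stream; the remote sees a failure
}

fn record_received(record: Record, now: Instant, store: &mut Vec<Record>) -> Reply {
    if record.is_expired(now) {
        // Initial choice in this PR: do not store, reset the substream.
        Reply::Reset
    } else {
        store.push(record);
        Reply::PutRecordRes
    }
}
```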

@romanb
Contributor

romanb commented Mar 13, 2020

I decided to reset the connection and thereby not (wrongly) acknowledge that the node cached the record. What do you think?

There are a few reasons why I think the correct thing to do is to send a regular acknowledgement:

  1. It is the current behaviour.
  2. An honest node a always tries to send a record to the k closest nodes to the key found by an iterative query. If some node b receives such a store request and considers itself so far away from the key that the expiry is instant, then either a or b has a very distorted view of the overlay network, which may even be maliciously induced. In either case, it is not for b to judge by rejecting the record. Since b received the request, evidently a considers b to be among the k closest to the key, so b stores it, subject to its own modifications to the expiry according to its own view of the network. If that expiry is instant, so be it; that's not an error.
  3. Sending a regular acknowledgement does not reveal any information to a potentially maliciously motivated sender trying to temporarily flood a node's storage, i.e. it is not told through an error that the record key is considered "too far away" by the targeted node. Not unnecessarily giving away such information is usually beneficial.

@romanb
Contributor

romanb commented Mar 13, 2020

  4. Requests to store a record are also received as a result of the "outwards" caching of records via targeted requests to single nodes, which happens during regular record lookups. Silently and instantly expiring records when such caching reaches the perimeter of the eligible caching region is, I think, precisely what is desired.
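The "perimeter of the eligible caching region" above arises from distance-dependent TTL decay. As a hedged sketch (not the actual rust-libp2p code), one common scheme following the Kademlia paper halves a record's TTL for each node between the local node and the k closest to the key, so a node far enough from the key computes an expiry at or near "now", i.e. the silent, instant expiry being discussed:

```rust
use std::time::Duration;

/// Hypothetical helper: exponentially decrease a record's TTL based on
/// how many nodes sit between the local node and the k closest to the
/// key. At the perimeter, the resulting TTL approaches zero.
fn exp_decrease(ttl: Duration, nodes_beyond_k: u32) -> Duration {
    // Cap the shift so a huge distance cannot overflow the shift amount.
    Duration::from_secs(ttl.as_secs() >> nodes_beyond_k.min(63))
}
```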

With this commit the remote receives a
[`KademliaHandlerIn::PutRecordRes`] even in the case where the record
is discarded because it has expired. Given that the remote sent the
local node a [`KademliaHandlerEvent::PutRecord`] request, the remote
perceives the local node as one of the k closest nodes to the target.
Returning a [`KademliaHandlerIn::Reset`] instead of a
[`KademliaHandlerIn::PutRecordRes`] to have the remote try another
node would only result in the remote contacting an even more distant
node. In addition, returning [`KademliaHandlerIn::PutRecordRes`] does
not reveal any internal information to a possibly malicious remote
node.
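The merged behaviour described by the commit message above can be sketched like this (hypothetical types, not the real rust-libp2p handler): the remote is acknowledged whether or not the record is stored, and an expired record is simply dropped instead of entering the store.

```rust
use std::time::Instant;

// Hypothetical minimal record type for illustration.
struct Record {
    expires: Option<Instant>,
}

fn record_received(record: Record, now: Instant, store: &mut Vec<Record>) -> &'static str {
    let expired = record.expires.map_or(false, |t| now >= t);
    if !expired {
        store.push(record);
    }
    // Acknowledge in both cases; nothing about the drop leaks out.
    "PutRecordRes"
}
```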
@mxinden
Member Author

mxinden commented Mar 13, 2020

Not triggering a retry because:

  • The local node is already among the k closest nodes the remote node is aware of.

  • Reaching "the perimeter of the eligible caching region" should not trigger further retries.

At the same time reducing the amount of information returned to possibly malicious actors seems plausible.

I added an additional commit to return a regular acknowledgement. @romanb can you take another look?

Contributor

@romanb romanb left a comment


A few minor comments / suggestions, otherwise looks good to me.

handler: NotifyHandler::One(connection),
event: KademliaHandlerIn::Reset(request_id)
})
if !record.is_expired(Instant::now()) {

Suggested change:
- if !record.is_expired(Instant::now()) {
+ if !record.is_expired(now) {

protocols/kad/src/behaviour.rs
// request, the remote perceives the local node as one node among the k
// closest nodes to the target. Returning a [`KademliaHandlerIn::Reset`]
// instead of an [`KademliaHandlerIn::PutRecordRes`] to have the remote
// try another node would only result in the remote node to contact an

@romanb romanb Mar 16, 2020


In the current implementation, there would not be a retry on another node. Once the iterative query produces the k closest nodes to put a record to, this fixed set receives these requests. If one of these requests fails, it counts as (and is reported as) a failure w.r.t. the required quorum for the write operation. So I would suggest removing the second part of this comment. The primary reason for not returning an error is that this is precisely how the outwards caching of records with decreasing TTL is supposed to work.

@mxinden
Member Author

mxinden commented Mar 18, 2020

Recent round of suggestions applied. Thanks for the feedback. Let me know if you have any further comments.

@romanb romanb merged commit 522020e into libp2p:master Mar 18, 2020