protocols/kad: Do not attempt to store expired record in record store #1496
Conversation
`Kademlia::record_received` calculates the expiration time of a record before inserting it into the record store. Instead of inserting the record into the record store in any case, with this patch the record is only inserted if it is not expired. If the record is expired, a `KademliaHandlerIn::Reset` for the given (sub)stream is triggered. This would serve as a tiny defense mechanism against an attacker trying to fill a node's record store with expired records before the record store's clean-up procedure removes them.
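To make the change concrete, here is a minimal, self-contained sketch of the behaviour described above, using heavily simplified stand-in types: `Record`, `Response`, `handle_put_record` and the `Vec`-based store are invented for illustration; only `is_expired`, `PutRecordRes` and `Reset` correspond to names that appear in the actual change.

```rust
use std::time::{Duration, Instant};

// Stand-in for rust-libp2p's record type; the real one carries key, value,
// publisher and more.
struct Record {
    expires: Option<Instant>,
}

impl Record {
    // A record whose expiration time lies in the past is expired; a record
    // without an expiration time never expires.
    fn is_expired(&self, now: Instant) -> bool {
        self.expires.map_or(false, |t| now >= t)
    }
}

// Simplified stand-in for the two handler messages discussed in this PR.
enum Response {
    PutRecordRes, // regular acknowledgement
    Reset,        // reset the (sub)stream
}

// Only a record that has not yet expired reaches the store; for an expired
// record, the initial version of this patch resets the stream instead.
fn handle_put_record(store: &mut Vec<Record>, record: Record) -> Response {
    let now = Instant::now();
    if !record.is_expired(now) {
        store.push(record);
        Response::PutRecordRes
    } else {
        Response::Reset
    }
}

fn main() {
    let mut store = Vec::new();
    let rec = Record { expires: Some(Instant::now() + Duration::from_secs(60)) };
    match handle_put_record(&mut store, rec) {
        Response::PutRecordRes => println!("stored and acknowledged"),
        Response::Reset => println!("expired, stream reset"),
    }
}
```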
There are a few reasons why I think the correct thing to do is to send a regular acknowledgement:
With this commit the remote receives a [`KademliaHandlerIn::PutRecordRes`] even in the case where the record is discarded due to being expired. Given that the remote sent the local node a [`KademliaHandlerEvent::PutRecord`] request, the remote perceives the local node as one node among the k closest nodes to the target. Returning a [`KademliaHandlerIn::Reset`] instead of a [`KademliaHandlerIn::PutRecordRes`] to have the remote try another node would only result in the remote contacting an even more distant node. In addition, returning a [`KademliaHandlerIn::PutRecordRes`] does not reveal any internal information to a possibly malicious remote node.
Not triggering a retry because:
At the same time, reducing the amount of information returned to possibly malicious actors seems sensible. I added an additional commit to return a regular acknowledgement. @romanb can you take another look?
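Sticking with the stand-in types from the sketch above, the revised flow after that additional commit might look roughly like this: the store insertion stays gated on the expiry check, but the acknowledgement is sent unconditionally.

```rust
// Revised flow (sketch only, reusing the stand-in `Record`/`Response` types
// from the earlier example): an expired record is silently dropped, yet the
// remote still receives a regular acknowledgement, so the local decision is
// not observable from the outside.
fn handle_put_record(store: &mut Vec<Record>, record: Record) -> Response {
    let now = Instant::now();
    if !record.is_expired(now) {
        store.push(record);
    }
    Response::PutRecordRes
}
```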
A few minor comments / suggestions, otherwise looks good to me.
protocols/kad/src/behaviour.rs (Outdated)

        handler: NotifyHandler::One(connection),
        event: KademliaHandlerIn::Reset(request_id)
    })
    if !record.is_expired(Instant::now()) {
Suggested change:

    -if !record.is_expired(Instant::now()) {
    +if !record.is_expired(now) {
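The suggested change reuses a `now` value that the surrounding function has already captured (and also uses to compute the record's expiration), instead of reading the clock a second time. A small stand-alone illustration of the idea, with invented names and an arbitrary TTL:

```rust
use std::time::{Duration, Instant};

// Capture the clock once; derive the expiration from it and check against
// the very same instant, so both computations agree on one point in time.
fn expired_on_arrival(ttl: Duration) -> bool {
    let now = Instant::now();
    let expires = now + ttl; // expiration derived from `now`
    now >= expires           // checked against the same `now`
}

fn main() {
    // With any non-zero TTL the record cannot be expired at this point.
    assert!(!expired_on_arrival(Duration::from_secs(60)));
}
```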
protocols/kad/src/behaviour.rs (Outdated)

    // request, the remote perceives the local node as one node among the k
    // closest nodes to the target. Returning a [`KademliaHandlerIn::Reset`]
    // instead of an [`KademliaHandlerIn::PutRecordRes`] to have the remote
    // try another node would only result in the remote node to contact an
In the current implementation, there would not be a retry on another node. Once the iterative query produces the k closest nodes to put a record to, this fixed set receives these requests. If one of these requests fails, it counts as (and is reported as) a failure w.r.t. the required quorum for the write operation. So I would suggest removing the second part of this comment. The primary reason for not returning an error is that this is precisely how the outwards caching of records with decreasing TTL is supposed to work.
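For context on that last point: the Kademlia paper caches records at nodes beyond the closest ones with an expiration time that shrinks with the distance from the key, so copies cached further out age out sooner and may even arrive already expired. The following is only an illustration of that idea, with invented names and an illustrative formula, not the computation rust-libp2p actually performs:

```rust
use std::time::{Duration, Instant};

// Shrink the remaining TTL exponentially with the number of nodes between
// the caching node and the node closest to the key (illustrative only).
fn cached_expiration(base_ttl: Duration, nodes_between: u32) -> Instant {
    let divisor = 2u32.saturating_pow(nodes_between).max(1);
    Instant::now() + base_ttl / divisor
}

fn main() {
    let base = Duration::from_secs(36 * 60 * 60); // e.g. a 36-hour base TTL
    let near = cached_expiration(base, 0);        // one of the k closest nodes
    let far = cached_expiration(base, 10);        // a node much further out
    // The far copy expires (much) earlier than the near copy.
    assert!(far < near);
}
```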
Recent round of suggestions applied. Thanks for the feedback. Let me know if you have any further comments.
This is a follow-up to #1492 (comment). Instead of an issue I created a pull request right away, given that the change set should be reasonably small.

Major discussion point: should we trigger a `KademliaHandlerIn::PutRecordRes` or a `KademliaHandlerIn::Reset` when the record is expired? As far as I can tell, the former would inform the remote that the value was indeed put into the record store, whereas the latter would simply have the remote remove the substream. The former would lead to an `on_success` call on the query; the latter would have the query retry another node. Consulting the specification did not reveal any suggestions. Reading the Go implementation, I failed to determine where the handler for a put event checks whether a given record is expired, given that it only seems to instantiate a `PublicKeyValidator`.

I decided on resetting the connection, thereby not (wrongly) acknowledging that the node cached the record. What do you think?
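On the sender's side, the practical difference between the two responses is whether the write counts towards the quorum of the PUT query or as a failure. A hedged sketch of that bookkeeping, with invented types (the real query state machine in rust-libp2p is considerably more involved):

```rust
// Minimal stand-in for the bookkeeping of a PUT query: acknowledgements
// count towards the required quorum, resets/errors count as failures.
struct PutQuery {
    quorum: usize,
    successes: usize,
    failures: usize,
}

impl PutQuery {
    fn on_success(&mut self) {
        self.successes += 1;
    }
    fn on_failure(&mut self) {
        self.failures += 1;
    }
    // The write is considered successful once enough peers acknowledged it.
    fn succeeded(&self) -> bool {
        self.successes >= self.quorum
    }
}

fn main() {
    let mut q = PutQuery { quorum: 2, successes: 0, failures: 0 };
    q.on_success(); // a peer replied with a regular acknowledgement
    q.on_failure(); // a peer reset the substream
    q.on_success(); // another peer acknowledged
    assert!(q.succeeded());
}
```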