Spark RSR: weight measurements using node reputation #180
Comments
A related attack vector that we could mitigate using a similar mechanism:
I am proposing to include the stationId age in the weight too. See also:
Attack vector: Generate many Station IDs, wait a month, and now you have old IDs to choose from.
Great point! There is a caveat, though: for each node (Station ID) and round, we choose the closest tasks using the following metric:
We can make it even more difficult for the attacker if we define Station ID age as the number of days on which at least one measurement was accepted from that Station ID, and reset the age back to zero for IDs with no accepted measurements in the last N days, where N can be around 30-90 (1-3 months). With N=30, an attacker can keep at most 32,400 "alive" Station IDs per IPv4 /24 subnet.
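The age definition proposed above (count only days with accepted measurements, reset after N idle days) can be sketched roughly as follows. This is a minimal sketch, not the Spark implementation; the function and parameter names (`stationIdAge`, `acceptedDays`, `resetAfterDays`) are illustrative.

```typescript
/**
 * Sketch of the proposed Station ID age.
 * `acceptedDays` — epoch days on which at least one measurement from this
 * Station ID was accepted, sorted ascending.
 * `today` — the current epoch day.
 * `resetAfterDays` — N from the discussion above (30-90 days).
 *
 * All identifiers here are hypothetical, not actual Spark code.
 */
function stationIdAge(
  acceptedDays: number[],
  today: number,
  resetAfterDays = 30,
): number {
  if (acceptedDays.length === 0) return 0;
  const lastAccepted = acceptedDays[acceptedDays.length - 1];
  // Reset: no accepted measurements in the last N days,
  // so pre-warmed but idle Station IDs lose their age.
  if (today - lastAccepted > resetAfterDays) return 0;
  // Age = number of days with at least one accepted measurement,
  // not wall-clock time since the ID was created.
  return acceptedDays.length;
}
```

Under this definition, an attacker who generates many Station IDs and lets them sit idle gains nothing: an ID with no accepted measurements in the last N days has age zero again.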
If I have ~1k "warm" station IDs at my disposal, ... (I got 1600 retrieval tasks in a round doing ...). Anyways, I know this isn't the focus currently, but I think this conversation just makes clear to me that this is a tough problem.
eta: 2025-02-06
Use the reputation introduced by #155 to give more weight to measurements from nodes with a good reputation and less weight to nodes that are too often in the minority.
This should make it more difficult for malicious parties to influence the fine-grained RSR score calculated from individual measurements.
Mitigate the following attack vector:
Related:
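The weighting idea described above amounts to a reputation-weighted average of measurement outcomes. A minimal sketch, assuming each measurement carries the reputation score of its reporting node; the exact mapping from the #155 reputation to a weight is an assumption here, not the Spark design:

```typescript
// Hypothetical measurement shape; not an actual Spark type.
interface Measurement {
  retrievalSucceeded: boolean;
  nodeReputation: number; // assumed to be in [0, 1]; higher = more trusted
}

/**
 * Reputation-weighted Retrieval Success Rate:
 * RSR = sum(weight_i * success_i) / sum(weight_i).
 * Measurements from high-reputation nodes move the score more;
 * nodes that are often in the minority contribute little.
 */
function weightedRsr(measurements: Measurement[]): number {
  let weightedSuccesses = 0;
  let totalWeight = 0;
  for (const m of measurements) {
    // Simplest possible choice: weight equals reputation directly.
    const weight = m.nodeReputation;
    totalWeight += weight;
    if (m.retrievalSucceeded) weightedSuccesses += weight;
  }
  return totalWeight === 0 ? 0 : weightedSuccesses / totalWeight;
}
```

With this shape, a malicious party would need to sustain a good reputation (i.e. agree with the majority over time) before its measurements can meaningfully skew the fine-grained RSR score.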