
Peer cache #4574

Merged: 11 commits merged into nanocurrency:develop, Apr 22, 2024
Conversation

@pwojcikdev (Contributor) commented Apr 17, 2024

This introduces a new peer_cache component that is responsible for managing cached peers. Previously, storing peers was handled by tcp_channels and loading by node, neither of which has this as a core responsibility.
The previous behavior was that only the latest set of peers was stored (i.e. the cache was cleared before every write). Now the cache keeps track of peers up to a certain cutoff age (1 hour by default).
Another behavior change is that the network reachout thread now tries to contact cached peers continuously. Previously this was done only once, at node startup. This should improve resiliency if a node temporarily loses connection to its peers (e.g. due to a network outage).
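
To make the cutoff-based behavior concrete, here is a minimal, self-contained sketch assuming a simple endpoint-to-timestamp map; all names (peer_cache_sketch, insert_or_refresh, purge_old) are illustrative assumptions, not the API introduced by this PR:

// Sketch of a cutoff-based peer cache; illustrative only.
#include <chrono>
#include <iterator>
#include <map>
#include <string>

struct peer_cache_sketch
{
	using clock = std::chrono::steady_clock;

	// Entries older than the cutoff are erased, instead of the whole
	// cache being cleared before every write (the previous behavior).
	std::chrono::seconds cutoff{ 3600 }; // default: 1 hour

	std::map<std::string, clock::time_point> peers; // endpoint -> last seen

	void insert_or_refresh (std::string const & endpoint)
	{
		peers[endpoint] = clock::now (); // refresh timestamp on contact
	}

	void purge_old ()
	{
		auto const now = clock::now ();
		for (auto it = peers.begin (); it != peers.end ();)
		{
			it = (now - it->second > cutoff) ? peers.erase (it) : std::next (it);
		}
	}
};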

@@ -4,12 +4,25 @@
nano::store::lmdb::peer::peer (nano::store::lmdb::component & store) :
store{ store } {};
@dsiganos (Contributor) commented on the diff, Apr 17, 2024:

Spurious semicolon.
Lesser point: the brace placement looks a bit odd stylistically.

@dsiganos (Contributor) commented Apr 17, 2024

Should nano_node --debug_peers also print the timestamp?
The current print line of --debug_peers can be simplified to this:

std::cout << i->first.endpoint () << std::endl;

@gr0vity-dev heads-up: this might affect your scripts, I think you interact directly with the peers table, right?
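
For illustration, a self-contained sketch of what printing both endpoint and timestamp could look like, using a plain std::map as a hypothetical stand-in for the peers table; this is not the actual --debug_peers implementation:

#include <cstdint>
#include <iostream>
#include <map>
#include <string>

int main ()
{
	// Hypothetical stand-in for the peers table: endpoint -> stored timestamp.
	std::map<std::string, std::uint64_t> peers{
		{ "[::ffff:192.168.1.10]:7075", 1713366000 },
		{ "[::ffff:192.168.1.11]:7075", 1713366042 },
	};

	for (auto const & [endpoint, timestamp] : peers)
	{
		// The simplified print line suggested above, extended with the timestamp.
		std::cout << endpoint << " " << timestamp << std::endl;
	}
}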

@gr0vity-dev (Contributor) commented:

Thanks for the heads-up.
The continuous testing scripts don't use the peers table directly anymore.

@clemahieu (Contributor) previously approved these changes Apr 18, 2024 and left a comment:

This looks like a good cleanup.

The only thing I noticed was naming: is this really a cache? Maybe peer_history?

Not a blocker.

@qwahzi added this to the V27 milestone Apr 18, 2024
@dsiganos (Contributor) commented:

We do not really need to store the timestamp in the database; it could be kept in RAM only. Is it stored in the database for design simplicity?

@dsiganos (Contributor) commented:

I think we should be storing the peer node ID in the peer database too.
But that is possibly beyond the scope of this PR.

dsiganos previously approved these changes Apr 21, 2024
}
lock.unlock ();

node.stats.inc (nano::stat::type::network, nano::stat::detail::loop_reachout);
Contributor comment on the diff:

Should this counter be loop_reachout_cached?
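
For context, a hedged, self-contained sketch of the periodic reachout behavior this counter instruments; the function and peer names are assumptions, and only the counter naming question above refers to the real code:

#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Sketch of the new behavior: cached peers are retried periodically by the
// reachout loop instead of being contacted only once at startup. The real
// loop lives in the node's network component; names here are illustrative.
void reachout_cached (std::vector<std::string> const & cached_peers, int iterations)
{
	for (int i = 0; i < iterations; ++i)
	{
		for (auto const & peer : cached_peers)
		{
			// In the node this would attempt a connection via tcp_channels
			// and bump a stat counter (e.g. the loop_reachout_cached name
			// suggested above).
			std::cout << "reachout attempt: " << peer << std::endl;
		}
		std::this_thread::sleep_for (std::chrono::milliseconds (250));
	}
}

int main ()
{
	reachout_cached ({ "peer-a:7075", "peer-b:7075" }, 3);
}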

(Review thread on nano/node/peer_cache.cpp: outdated, resolved.)
# Conflicts:
#	nano/lib/thread_roles.cpp
#	nano/lib/thread_roles.hpp
@pwojcikdev dismissed stale reviews from dsiganos and clemahieu via fe01ecb, April 22, 2024 15:22
@pwojcikdev (Contributor, Author) commented:

I fixed a few minor issues pointed out in the review and renamed the class to peer_history. The timestamps are stored in the database for simplicity; storing them in RAM could work, but I don't think this is important enough to optimize. In the grand scheme of things, storing an additional uint64 per peer is not going to make a difference.
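
To illustrate the overhead being discussed, a hypothetical sketch of a per-peer record; the struct and field names are assumptions, and the point is just the extra 8 bytes per entry:

#include <array>
#include <cstdint>
#include <iostream>

// Hypothetical record layout: IPv6 address and port identify the peer,
// plus the uint64 timestamp discussed above as the stored value.
struct peer_record_sketch
{
	std::array<std::uint8_t, 16> address; // IPv6 bytes
	std::uint16_t port;
	std::uint64_t timestamp; // the extra field kept in the database
};

int main ()
{
	std::cout << "extra bytes per peer: " << sizeof (std::uint64_t) << std::endl; // 8
}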

@pwojcikdev merged commit ea6557d into nanocurrency:develop, Apr 22, 2024
22 of 26 checks passed