
[release-v2.0] main: Use backported peer updates. #3390

Merged
merged 13 commits into decred:release-v2.0 from rel20_peer_backports on Jun 19, 2024

Conversation

davecgh
Member

@davecgh davecgh commented Jun 19, 2024

This updates the 2.0 release branch to use the latest version of the peer module, which includes updates to improve the privacy and propagation speed of inventory announcements as well as to expire known inventory cache entries after a timeout.

In particular, the following updated module version is used:

  • github.com/decred/dcrd/peer/v3@v3.1.2

Note that it also cherry-picks all of the commits included in the peer/v3 module updates, along with the new dependency modules those updates rely on (container/lru and crypto/rand), to ensure they are included in the release branch as well.  This is not strictly necessary since go.mod has been updated to require the new peer v3.1.2 release and will therefore pull in the new code.  However, past experience shows that not having the backported code available in the release branch modules too leads to headaches for devs building from source in their local workspace with overrides such as those in go.work.
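
For context, this is roughly the kind of workspace override being referred to.  A minimal go.work sketch, assuming the backported modules live in the peer, container/lru, and crypto/rand directories of a local dcrd checkout (directory names inferred from the module paths):

go 1.21

use (
    .                // main dcrd module
    ./peer           // peer/v3 with the backported changes
    ./container/lru  // new container/lru module
    ./crypto/rand    // new crypto/rand module
)

With a workspace like this, the toolchain resolves those modules from the local tree rather than from the published releases, which is why the backported commits also need to exist on the release branch itself.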

davecgh and others added 13 commits June 19, 2024 14:45
This implements a new module named container/lru which provides two
efficient and full-featured generic least recently used (LRU) data
structures with additional support for optional configurable per-item
expiration timeouts via a time to live (TTL) mechanism.

As compared to the existing module that this intends to replace, this
new implementation makes use of generics introduced in Go 1.18 in order
to provide full type safety and avoid forced allocations that the
previous implementation based on interfaces required.

Both implementations are safe for use in multi-threaded (concurrent)
workloads and exhibit nearly O(1) lookups, inserts, and deletions
with no additional heap allocations beyond the stored items.

As is a defining characteristic of LRU data structures, both are
limited to a configurable maximum number of items, with the least
recently used entry evicted when the limit is exceeded.

One of the new data structures is named `Set` and is tailored towards
use cases that involve storing a distinct collection of items with
existence testing.  The other one is named `Map` and is aimed at use
cases that require caching and retrieving values by key.

Both implementations support optional default TTLs for item expiration
as well as provide the option to override the default TTL on a per-item
basis.

An efficient lazy removal scheme is used such that expired items are
periodically removed when items are added or updated.  This approach
allows for efficient amortized removal of expired items without the need
for additional background tasks, timers or heap allocations.

The following shows the performance of the new LRU map and set:

MapPutNoExp       10619336   108.6 ns/op   0 B/op   0 allocs/op
MapPutWithExp     10065508   110.2 ns/op   0 B/op   0 allocs/op
MapGet            28248453    41.2 ns/op   0 B/op   0 allocs/op
MapExists         33551979    34.3 ns/op   0 B/op   0 allocs/op
MapPeek           29699367    37.6 ns/op   0 B/op   0 allocs/op
SetPutNoExp       10343293   109.7 ns/op   0 B/op   0 allocs/op
SetPutWithExp     10357228   110.1 ns/op   0 B/op   0 allocs/op
SetContains       28183766    41.2 ns/op   0 B/op   0 allocs/op
SetExists         34581334    34.6 ns/op   0 B/op   0 allocs/op

The following shows the performance versus the old interface-based LRU
for the overlapping functionality (KVCache -> Map, Cache -> Set):

name            old time/op    new time/op    delta
--------------------------------------------------------------------
MapPutNoExp     157.0ns ± 1%   108.0ns ± 2%   -31.06%  (p=0.008 n=5+5)
MapGet           46.2ns ± 1%   40.9ns ± 1%    -11.51%  (p=0.008 n=5+5)
MapContains      45.1ns ± 1%   34.2ns ± 2%    -24.24%  (p=0.008 n=5+5)
SetPutNoExp     145.0ns ± 2%   109.0ns ± 2%   -24.91%  (p=0.008 n=5+5)
SetContains      44.3ns ± 2%   41.5ns ± 3%     -6.50%  (p=0.016 n=5+5)

name            old alloc/op   new alloc/op   delta
--------------------------------------------------------------------
MapPutNoExp     16.0B ± 0%     0.0B            -100.00%  (p=0.008 n=5+5)
MapGet          0.00B          0.00B               ~     (all equal)
MapContains     0.00B          0.00B               ~     (all equal)
SetPutNoExp     8.00B ± 0%     0.00B           -100.00%  (p=0.008 n=5+5)
SetContains     0.00B          0.00B               ~     (all equal)

name            old allocs/op  new allocs/op  delta
--------------------------------------------------------------------
MapPutNoExp     2.00 ± 0%      0.00           -100.00%  (p=0.008 n=5+5)
MapGet          0.00           0.00               ~     (all equal)
MapExists       0.00           0.00               ~     (all equal)
SetPutNoExp     1.00 ± 0%      0.00           -100.00%  (p=0.008 n=5+5)
SetContains     0.00           0.00               ~     (all equal)
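
For illustration, here is a minimal sketch of using the new Map described above.  The constructor and per-item TTL method names are assumptions based on the description and benchmark names, not a verbatim copy of the module's API:

package main

import (
    "fmt"
    "time"

    "github.com/decred/dcrd/container/lru"
)

func main() {
    // Assumed constructor: a map limited to 100 items whose entries
    // expire after one minute by default.  The real API may differ.
    m := lru.NewMapWithDefaultTTL[string, uint64](100, time.Minute)

    // Put inherits the default TTL; per the description above, the
    // default can also be overridden per item (method name assumed).
    m.Put("best-height", 850000)
    m.PutWithTTL("short-lived", 1, 5*time.Second)

    // Lookups stop returning the value once the item is evicted by the
    // LRU limit or lazily removed after its TTL elapses.
    if v, ok := m.Get("best-height"); ok {
        fmt.Println(v)
    }
}

Because both containers are generic, keys and values are stored without the boxing that the old interface-based cache required, which is where the alloc/op improvements above come from.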
This updates the docs/README.md file, module hierarchy graphviz, and
module hierarchy diagram to reflect the new container/lru module.

This updates the peer known inventory and sent nonces LRU caches to use
the new container/lru module.

It also uses the expiration functionality of the new module to impose an
expiration time of 15 minutes on the known inventory entries.  This will
allow the possibility of the cache shrinking over time in periods of low
activity versus the current behavior where it will eventually reach the
limit and stay there indefinitely.
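
Roughly, the resulting pattern looks like the following sketch, again assuming a TTL-aware Set constructor; the names and limits are placeholders rather than the actual peer code:

package main

import (
    "time"

    "github.com/decred/dcrd/container/lru"
)

// invVect is a stand-in for the wire inventory vector type so the
// sketch stays self-contained.
type invVect struct {
    invType uint32
    hash    [32]byte
}

func main() {
    // Assumed constructor: a known-inventory set bounded by an LRU limit
    // with the 15-minute default TTL described above.
    known := lru.NewSetWithDefaultTTL[invVect](1000, 15*time.Minute)

    iv := invVect{invType: 1}
    known.Put(iv) // marking inventory known refreshes recency and expiration

    // Exists reports false once an entry is evicted by the size limit or
    // lazily removed after the TTL elapses, so the cache can shrink
    // during periods of low activity.
    _ = known.Exists(iv)
}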
The github.com/decred/dcrd/crypto/rand module provides an alternative to the
standard library's math/rand, math/rand/v2, and crypto/rand packages.  It
implements a package-global fast userspace CSPRNG that never errors after
initial seeding at init time with the ability to create additional PRNGs
without locking overhead if needed.  In addition to providing random bytes,
the PRNG is also capable of generating cryptographically secure integers with
uniform distribution, and provides a Fisher-Yates shuffle function that can be
used to shuffle slices with random indexes.

To match the naming used by math/rand/v2 and stay consistent with the
existing IntN/UintN funcs, uppercase all other Ns in the *{32,64}N
functions.

BenchmarkDcrdRead/4b        54476367      22 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdRead/8b        43589287      28 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdRead/32b       17633469      68 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdRead/512b       1691400     709 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdRead/1KiB        827288    1380 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdRead/4KiB        220063    5475 ns/op   0 B/op   0 allocs/op
BenchmarkStdlibRead/4b       2659458     456 ns/op   0 B/op   0 allocs/op
BenchmarkStdlibRead/8b       2697830     448 ns/op   0 B/op   0 allocs/op
BenchmarkStdlibRead/32b      2696924     447 ns/op   0 B/op   0 allocs/op
BenchmarkStdlibRead/512b      735306    1710 ns/op   0 B/op   0 allocs/op
BenchmarkStdlibRead/1KiB      423681    2879 ns/op   0 B/op   0 allocs/op
BenchmarkStdlibRead/4KiB      113619   10524 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdReadPRNG/4b    66678519      18 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdReadPRNG/8b    48892782      24 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdReadPRNG/32b   19831497      61 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdReadPRNG/512b   1733780     685 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdReadPRNG/1KiB    923146    1353 ns/op   0 B/op   0 allocs/op
BenchmarkDcrdReadPRNG/4KiB    215394    5390 ns/op   0 B/op   0 allocs/op
BenchmarkInt32N             35768257      32 ns/op   0 B/op   0 allocs/op
BenchmarkUint32N            38023416      33 ns/op   0 B/op   0 allocs/op
BenchmarkInt64N             39299421      31 ns/op   0 B/op   0 allocs/op
BenchmarkUint64N            40006666      31 ns/op   0 B/op   0 allocs/op
BenchmarkDuration           31579362      34 ns/op   0 B/op   0 allocs/op
BenchmarkShuffleSlice       42814939      28 ns/op   0 B/op   0 allocs/op
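
For reference, a short sketch exercising the functions named in the benchmarks above; the exact signatures are assumptions based on the package description rather than a verbatim copy of its API:

package main

import (
    "fmt"
    "time"

    "github.com/decred/dcrd/crypto/rand"
)

func main() {
    // Fill a buffer with random bytes from the package-global CSPRNG.
    var nonceBytes [8]byte
    rand.Read(nonceBytes[:])

    // Uniformly distributed integers in [0, n).
    die := rand.Int64N(6) + 1
    idx := rand.Uint32N(1000)

    // Uniform duration in [0, d), handy for randomized timeouts.
    jitter := rand.Duration(400 * time.Millisecond)

    // Fisher-Yates shuffle of a slice in place.
    addrs := []string{"a", "b", "c", "d"}
    rand.ShuffleSlice(addrs)

    fmt.Println(nonceBytes, die, idx, jitter, addrs)
}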
This updates the peer module to use dcrd/crypto/rand for address
shuffling and nonce generation.

This changes the inventory batching delay from a static 500ms ticker to a
random delay uniformly distributed between 100ms and 500ms.  The intention is
to improve privacy and simultaneously increase the network propagation speed
of inventoried messages (including transactions, blocks, and mixing
messages), while still keeping inventory batching useful when many messages
are inventoried within the same short duration.

As this change rolls out to more nodes on the network, it will not only add
more random jitter to the timing of sent messages, but also change the
message paths.  Currently, with every peer using a 500ms ticker, if no
changes occur to the graph of connected nodes, messages will always propagate
through the nodes in the same order.  However, once nodes randomize their
inventory batching delays, the peer that relays a message first on the next
hop will vary.
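
A sketch of how such a randomized batching delay can be drawn with the crypto/rand module from the earlier commits.  The constant names are illustrative, and rand.Duration is assumed to return a uniform duration in [0, d):

package main

import (
    "time"

    "github.com/decred/dcrd/crypto/rand"
)

// Illustrative constants matching the commit message; the names in the
// actual peer code may differ.
const (
    minInvBatchDelay = 100 * time.Millisecond
    maxInvBatchDelay = 500 * time.Millisecond
)

// nextInvBatchDelay draws a fresh delay uniformly from [100ms, 500ms)
// for the next inventory batch, replacing the old fixed 500ms ticker.
func nextInvBatchDelay() time.Duration {
    return minInvBatchDelay + rand.Duration(maxInvBatchDelay-minInvBatchDelay)
}

func main() {
    // Wait a newly randomized delay before sending the next batch of
    // queued inventory instead of ticking at a constant interval.
    timer := time.NewTimer(nextInvBatchDelay())
    defer timer.Stop()
    <-timer.C
}

Drawing a fresh delay for every batch, rather than reusing a fixed ticker, is what produces the per-hop variation in relay order described above.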
This updates the 2.0 release branch to use the latest version of the
peer module, which includes updates to improve the privacy and
propagation speed of inventory announcements as well as to expire known
inventory cache entries after a timeout.

In particular, the following updated module version is used:

- github.com/decred/dcrd/peer/v3@v3.1.2
@davecgh davecgh added this to the 2.0.3 milestone Jun 19, 2024
@davecgh davecgh merged commit 6dd7cdf into decred:release-v2.0 Jun 19, 2024
2 checks passed
@davecgh davecgh deleted the rel20_peer_backports branch June 19, 2024 20:03