Author: @guillaumemichel
This folder contains the data, plots, and scripts used for RFM19, DHT Routing Table Health. The report can be found here.
All plots from the RFM19 report can be found in the `plots` folder.
The `data` folder contains the data needed to run the measurement scripts. All data was obtained from the Nebula Crawler, and only the necessary information is included in the `csv` files.
`data/all-peerids.csv` contains the mapping `nebula_id <-> peer_id` of all nodes observed by the Nebula Crawler between `2022-02-16` and `2022-05-03`.
`data/nebula-peers-2crawls.csv` contains data from 2 successive Nebula crawls on `2022-05-03` at `06:00` and `07:00` UTC. Each row of this file follows the format `crawl_id, nebula_id, peer_id, neighbor0_nebula_id, neighbor1_nebula_id, ..., neighborN_nebula_id`: there is one row per peer and per crawl, giving the peer ID of the node and the nebula IDs of all the nodes in its routing table. Note that unreachable peers have a row, but don't have any `neighborX_nebula_id` entries. This data is enough to reconstruct the global routing table.
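As a minimal sketch of working with this format, the following Python snippet loads such a file into per-crawl routing tables. It assumes the file has no header row and the exact column layout described above (`load_routing_tables` is a hypothetical helper name, not part of the repository's notebooks):

```python
import csv
from collections import defaultdict

def load_routing_tables(path):
    """Load a nebula-peers CSV into {crawl_id: {nebula_id: [neighbor ids]}}.

    Rows have the shape: crawl_id, nebula_id, peer_id, then a variable
    number of neighbor nebula_ids (zero for unreachable peers).
    """
    tables = defaultdict(dict)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            crawl_id, nebula_id, _peer_id = row[0], row[1], row[2]
            # Everything after the first three columns is a neighbor id;
            # unreachable peers simply get an empty list.
            tables[crawl_id][nebula_id] = row[3:]
    return tables
```

Iterating over one crawl's dictionary then gives the full set of observed routing tables for that crawl.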
Two additional data files were too large to upload to GitHub; they are accessible on web3.storage.
`data/nebula-peers-5crawls.csv` [CID: `bafybeiaqr6csdcnxrkpx23oithpjtuhpzudq43pkciavygsfr3i5qz6dcy`] contains data from 5 successive Nebula crawls on `2022-05-03` from `03:00` to `07:00` UTC.

`data/nebula-peers-1week.csv` [CID: `bafybeif6u5pukl3szezll4myp77eebeyq6ywp7hveahyllutsj7hepxltq`] contains data from 28 non-successive Nebula crawls from `2022-04-19` until `2022-04-26`, every day at `21:00`, `03:00`, `09:00` and `15:00` UTC.
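These two files can be retrieved by CID through any public IPFS gateway. The sketch below builds a subdomain-style gateway URL and downloads the file; the `w3s.link` gateway is an assumption (it is web3.storage's public gateway), and `gateway_url`/`fetch_by_cid` are hypothetical helper names:

```python
from urllib.request import urlretrieve

# Assumption: the web3.storage public gateway, reachable as
# https://<cid>.ipfs.w3s.link ; any other IPFS gateway would work too.
GATEWAY_TEMPLATE = "https://{cid}.ipfs.w3s.link"

def gateway_url(cid):
    """Build the gateway URL for a given CID."""
    return GATEWAY_TEMPLATE.format(cid=cid)

def fetch_by_cid(cid, dest):
    """Download the content addressed by `cid` to the local path `dest`."""
    urlretrieve(gateway_url(cid), dest)
```

For example, `fetch_by_cid("bafybeiaqr6csdcnxrkpx23oithpjtuhpzudq43pkciavygsfr3i5qz6dcy", "data/nebula-peers-5crawls.csv")` would download the 5-crawls file (roughly, modulo gateway availability).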
The following Python notebooks helped us analyse the data coming from the Nebula Crawler.

- `postgres_to_csv.ipynb` is used to query the Nebula Crawler database and generate the `csv` data files used in the data analysis.
- `dht-routing-table-health.ipynb` contains the analysis of the DHT routing table health, and generates the plots.
In order to reproduce the results, you can download the data files and simply run the `dht-routing-table-health.ipynb` notebook. You can also generate new data with the Nebula Crawler and `postgres_to_csv.ipynb`, or from any other data source, and use the notebook to produce the associated plots.
The `py-binary-trie` package, version `0.0.7`, has to be installed: `pip install binary-trie==0.0.7`.