
better output for ipfs stats bw #4740

Open
whyrusleeping opened this issue Feb 27, 2018 · 7 comments
Labels
kind/enhancement A net-new feature or improvement to an existing feature

Comments

@whyrusleeping
Member

This command should be able to give a breakdown of bandwidth usage per protocol or per peer (not individually, but in a table) so you can easily see which protocol or peer is consuming the most bandwidth.

whyrusleeping added the kind/enhancement label on Feb 27, 2018
@olizilla
Member

It would be really useful for the GUIs to be able to show bandwidth per CID, so we can do something like:

[screenshot: bandwidth-per-CID mockup, 2018-03-22]

@olizilla
Member

Related #2923

@whyrusleeping
Member Author

I think the best we can reasonably do in the mid-term is to have bandwidth per session, where most sessions are created to transfer a given graph (rooted by some CID).

@alanshaw
Member

I made a "top" for bandwidth:

[screenshot: peer bandwidth "top" view, 2018-04-12]

https://github.com/tableflip/ipfs-peer-bw-example

Some things struck me whilst building this:

  1. It's difficult to build the bandwidth-per-peer table because ipfs.stats.bw only allows getting bandwidth for a single peer. I can't send off ~1000+ requests to ipfs.stats.bw for stats every 5s; it locks up both the browser and the node. So the peers are stepped through in chunks.

    That approach gets around the perf issues but isn't great, because it means parts of the table are less up to date than other parts, and the larger the table grows, the less accurate it becomes.

    Where's the best place to request API changes? I'd like to be able to get info for multiple peers, or ideally just all peers (reported separately, like this).

  2. I can't think of many reasons why I'd want to know which peer is using my bandwidth. The best I can think of is blacklisting, but I don't think that's possible right now (at least through IPFS)...please correct me if I'm wrong!

    Instead, knowing which CIDs are using all my bandwidth would let me decide whether I should pin something that isn't currently pinned, or unpin something that is.
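The chunked polling described in point 1 can be sketched as below. This is a minimal illustration, not the actual ipfs-peer-bw-example code: `Stats`, `pollInChunks`, and the `fetch` callback are hypothetical names standing in for per-peer calls to ipfs.stats.bw.

```go
package main

import (
	"fmt"
	"sync"
)

// Stats is a stand-in for the per-peer bandwidth stats returned by
// a single ipfs.stats.bw call (hypothetical shape).
type Stats struct {
	RateIn, RateOut float64
}

// pollInChunks steps through peerIDs in fixed-size chunks, fetching
// stats for each chunk concurrently, so there are never ~1000
// requests in flight at once.
func pollInChunks(peerIDs []string, chunkSize int, fetch func(string) Stats) map[string]Stats {
	out := make(map[string]Stats, len(peerIDs))
	var mu sync.Mutex
	for start := 0; start < len(peerIDs); start += chunkSize {
		end := start + chunkSize
		if end > len(peerIDs) {
			end = len(peerIDs)
		}
		var wg sync.WaitGroup
		for _, id := range peerIDs[start:end] {
			wg.Add(1)
			go func(id string) {
				defer wg.Done()
				s := fetch(id)
				mu.Lock()
				out[id] = s
				mu.Unlock()
			}(id)
		}
		wg.Wait() // finish this chunk before starting the next
	}
	return out
}

func main() {
	peers := []string{"QmA", "QmB", "QmC", "QmD", "QmE"}
	stats := pollInChunks(peers, 2, func(id string) Stats {
		return Stats{RateIn: 1}
	})
	fmt.Println(len(stats))
}
```

The trade-off alanshaw notes falls out of the chunking: each chunk is fetched at a different time, so rows in the table are snapshots from slightly different moments.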

@Stebalien
Member

> Where's the best place to request API changes? I'd like to be able to get info for multiple peers, or ideally just all peers (separately like this).

Right here.

> I can't think of many reasons why I'd want to know which peer is using my bandwidth? The best I can think of is for blacklisting, but I don't think that's possible right now (at least through IPFS)...please correct me if I'm wrong!

It can be useful when tracking bandwidth usage within a cluster of IPFS nodes.

> Instead, knowing which CIDs are using all my bandwidth would allow me to determine if I should pin something that isn't currently pinned, or if I should unpin something that is pinned.

I agree, but we'll have to be very careful not to store too many CIDs in memory (we can quickly run into performance issues).
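One way to keep the per-CID bookkeeping bounded, as Stebalien cautions, would be a counter that caps how many distinct CIDs it holds and evicts the smallest entry when full. This is an illustrative sketch only; `boundedCounter` and its eviction policy are made up for this example, not anything in go-ipfs.

```go
package main

import "fmt"

// boundedCounter tracks per-CID byte counts but caps the number of
// distinct CIDs held in memory.
type boundedCounter struct {
	max    int
	counts map[string]uint64
}

func newBoundedCounter(max int) *boundedCounter {
	return &boundedCounter{max: max, counts: make(map[string]uint64)}
}

// Add credits n bytes to cid; if cid is new and the counter is full,
// the entry with the smallest count is evicted to make room.
func (b *boundedCounter) Add(cid string, n uint64) {
	if _, ok := b.counts[cid]; !ok && len(b.counts) >= b.max {
		var victim string
		var min uint64
		first := true
		for k, v := range b.counts {
			if first || v < min {
				victim, min, first = k, v, false
			}
		}
		delete(b.counts, victim)
	}
	b.counts[cid] += n
}

func main() {
	c := newBoundedCounter(1000)
	c.Add("QmExampleCID", 262144)
	fmt.Println(c.counts["QmExampleCID"])
}
```

Evicting the smallest counter biases the table toward the heaviest CIDs, which is what a bandwidth view cares about; an LRU policy would be another reasonable choice.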

@TUSF
Contributor

TUSF commented Sep 20, 2018

Currently, the only way I can think of to measure the bandwidth of a particular CID is to monitor ipfs log tail and capture the bitswap events emitted when a peer asks for a block.

However, this is far from ideal, as we can't know how much of a block was actually successfully downloaded, just how many times it was "hit". That may be "good enough" for most uses, though: a UI can estimate the bandwidth for any given block by multiplying the number of hits by the full size of the block.
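The estimate TUSF describes is just hits x full block size per CID. A minimal sketch of that aggregation (the `hits` and `blockSizes` maps are illustrative inputs, not a real go-ipfs API; in practice they would be populated from the bitswap events in `ipfs log tail`):

```go
package main

import "fmt"

// estimateBandwidth approximates bytes served per CID as
// (times the block was requested) x (full block size in bytes).
// It overcounts partial or failed transfers, per the caveat above.
func estimateBandwidth(hits map[string]int, blockSizes map[string]int) map[string]int {
	est := make(map[string]int, len(hits))
	for cid, n := range hits {
		est[cid] = n * blockSizes[cid]
	}
	return est
}

func main() {
	hits := map[string]int{"QmFoo": 3}
	sizes := map[string]int{"QmFoo": 262144} // one full 256 KiB block
	fmt.Println(estimateBandwidth(hits, sizes)["QmFoo"]) // 786432
}
```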

@olizilla
Member

olizilla commented Apr 8, 2019

I'm looking at how we can make npm-on-ipfs more compelling, and being able to show users a sharing ratio per CID (or really, per module) feels like important information for encouraging the co-hosting of things.

@raulk had a quick look into it and suggested that the stats would need to be tracked in the bitswap implementation:

> they’re sourced from go-libp2p-metrics

> stream writes and reads just update a counter on the metrics lib: https://github.com/libp2p/go-libp2p-swarm/blob/cdade26faf91fc210386bf8b99dda069944d36fc/swarm_stream.go#L91

> sharing ratio per CID sounds awesome, but that bookkeeping would have to happen in bitswap… the job of libp2p stops at the stream/protocol level

5 participants