
RFC: Move provide from bitswap to ipfs #5840

Closed

Conversation

michaelavila
Contributor

Overview

The purpose of this PR is to address the first few steps of the providing strategy work that is being tracked here: #5774. Ultimately, I'd like to get this merged, but am putting up the PR now because I've run into an issue that will likely require quite a bit more work. I'm wondering, is it worth merging this PR in the meantime? Please read the known issues and discuss.

Things in this PR:

  • A Provider has been added to IpfsNode
  • A Provider has been added to CoreAPI
  • CoreAPI.Provider is being used by the rest of CoreAPI (and more) to provide blocks as they are added and received

Things not in this PR:

  • Anything reprovider-related
  • A robust providing strategy

Known Issues

I'm looking for feedback on these things:

  • The most glaring issue, I think, is that CIDs are always provided when fetching, regardless of whether or not the blocks received come from the network. This means that running ipfs ls twice on, say, npm will cause all of npm to be provided twice. I put up this PR because solving this issue might require a more involved change and so I want to discuss it here before starting down a specific path. Some options (thanks @hannahhoward @eingenito):

    • Change the Blockservice.GetBlock interface so that I know whether or not the block returned came from the network. I could then use that information to decide whether or not to provide that block. This is perhaps both the simplest and the worst option.

    • Wrap the blockservice with something like a ProvideService, which keeps track of the blocks, perhaps in memory or a database, and also does the provide when necessary. In this case, whether or not we provide isn't based on whether the block came from the network, but rather on whether we have any indication that we haven't provided the block yet (based on the blocks we're keeping track of). See the sketch after this list of known issues.

    • Mark the blocks to be provided themselves and continue to provide blocks as soon as they're received (from bitswap), just taking the new "provide markers" into account. In this case, I don't know whether different repos would be able to have different providing strategies; at the very least we'd need to program for that if it were desired.

    • Or, there's some other, better idea that I'm not thinking of.

  • Not all of the provide calls happen in CoreAPI. This is because some things simply don't use CoreAPI. I think the solution to this is to change these areas that don't use CoreAPI so that they do. I hope we can solve this in a follow-up PR.

  • No tests. No good, I'll get them added before merging.

  • Not rebased. I'll work on that as well.
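
For the ProvideService option above, here's a minimal sketch of the idea (purely illustrative: the names are made up, and an in-memory map stands in for the database that would track what we've already provided):

package provider

import (
    "sync"

    cid "github.com/ipfs/go-cid"
)

// ProvideService is a hypothetical wrapper in front of the blockservice that
// remembers which CIDs we have already announced, so a block is only
// provided the first time we see it, regardless of where it came from.
type ProvideService struct {
    mu       sync.Mutex
    provided map[cid.Cid]struct{} // stand-in for an on-disk database
    announce func(cid.Cid) error  // the actual provide call (e.g. via the DHT)
}

func NewProvideService(announce func(cid.Cid) error) *ProvideService {
    return &ProvideService{
        provided: make(map[cid.Cid]struct{}),
        announce: announce,
    }
}

// MaybeProvide announces c only if we have no record of providing it before.
func (ps *ProvideService) MaybeProvide(c cid.Cid) error {
    ps.mu.Lock()
    _, seen := ps.provided[c]
    if !seen {
        ps.provided[c] = struct{}{}
    }
    ps.mu.Unlock()
    if seen {
        return nil
    }
    return ps.announce(c)
}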

@michaelavila michaelavila added the status/WIP This a Work In Progress label Dec 12, 2018
@michaelavila michaelavila requested a review from Kubuxu as a code owner December 12, 2018 21:15
@ghost ghost assigned michaelavila Dec 12, 2018
@ghost ghost added the status/in-progress In progress label Dec 12, 2018
@michaelavila michaelavila force-pushed the experiment/provide+reprovide branch from 33718df to e62a37e Compare December 12, 2018 21:19
@michaelavila michaelavila force-pushed the experiment/provide+reprovide branch from e62a37e to 1fa7c84 Compare December 12, 2018 22:28
@Stebalien Stebalien removed their request for review December 12, 2018 23:25

// eligibility strategies

func NewEligibleOnlyOnceStrategy() EligibleStrategy {
Contributor Author

@michaelavila michaelavila Dec 13, 2018


This is a very simplistic solution to the problem of providing the same CIDs too often when adding/fetching.
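
The hunk above is truncated by the diff view; as a rough illustration only (not necessarily the code in this PR), an "eligible only once" strategy could look something like this, assuming EligibleStrategy is just a predicate over CIDs:

import (
    "sync"

    cid "github.com/ipfs/go-cid"
)

// EligibleStrategy decides whether a CID should be provided right now
// (assumed signature, for illustration).
type EligibleStrategy func(cid.Cid) bool

// NewEligibleOnlyOnceStrategy reports a CID as eligible the first time it is
// asked about and never again, so repeated adds/fetches of the same block
// don't trigger repeated provides.
func NewEligibleOnlyOnceStrategy() EligibleStrategy {
    var mu sync.Mutex
    seen := make(map[cid.Cid]struct{})
    return func(c cid.Cid) bool {
        mu.Lock()
        defer mu.Unlock()
        if _, ok := seen[c]; ok {
            return false
        }
        seen[c] = struct{}{}
        return true
    }
}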

@magik6k
Member

magik6k commented Dec 13, 2018

Change the Blockservice.GetBlock interface so that I know whether or not the block returned came from the network. I could then use that information to decide whether or not to provide that block. This is perhaps both the simplest and the worst option.

Something similar came up recently in #5826 (comment); it might be worth considering whether there's a nice way to expose the origin/destination of a block in blockservice.

Not all of the provide calls happen in CoreAPI. This is because some things simply don't use CoreAPI. I think the solution to this is to change these areas that don't use CoreAPI so that they do. I hope we can solve this in a follow up PR.

This is what I intend to do - see #4498 (note that things like fuse will eventually get a port to CoreAPI and anything lower-level probably also wants the ability to provide manually anyways)

Member

@magik6k magik6k left a comment


First quick pass, will review more tomorrow.

Providing when fetching is going to be a bit tricky - should we really provide in ls when we only fetch the list of files in a directory and not the content itself, for example?


type ProviderAPI interface {
  // Announce that the given cid is being provided
  Provide(cid.Cid)
Member


This really should not be exposed in CoreAPI for a few reasons:

  • it's meant to be a stable interface
  • it should map as close to 1-1 with commands as reasonably possible
  • it shouldn't expose too much [internals] in order to keep the API minimal

Providing is (was) hidden from the user and I wouldn't expose anything before we at least get the internals in order (a global API option for changing the providing strategy would be really nice though, but that can come later).

Contributor Author


Understood.

Member


We should carefully consider this but I actually think we'll end up adding this API later anyways (although I agree we should hold off on that till we know).


// Handle all outgoing cids by providing them
func (p *Provider) handleOutgoing() {
  limit := make(chan struct{}, provideOutgoingLimit)
Member


You might want to use https://github.com/remeh/sizedwaitgroup (we have it gxed here), which does the same thing with an arguably cleaner interface.
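
For reference, a rough sketch of what that swap might look like (assuming the upstream sizedwaitgroup API of New/Add/Done/Wait; p.outgoing and p.provideOne are hypothetical stand-ins for the PR's channel of CIDs and the actual announce call):

import (
    cid "github.com/ipfs/go-cid"
    "github.com/remeh/sizedwaitgroup"
)

func (p *Provider) handleOutgoing() {
    // Add blocks once provideOutgoingLimit goroutines are in flight,
    // which replaces the buffered-channel semaphore.
    swg := sizedwaitgroup.New(provideOutgoingLimit)
    for c := range p.outgoing {
        swg.Add()
        go func(c cid.Cid) {
            defer swg.Done()
            p.provideOne(c) // hypothetical: the actual DHT announce
        }(c)
    }
    swg.Wait()
}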

@Stebalien
Member

This may be your solution 3 or a combination of 2 and 3 but I'm thinking we'll want to:

  1. Explicitly mark blocks to be provided outside of the blockservice/bitswap.
  2. Record this information in a database (really, we may be able to just use the datastore for this as long as we use leveldb or badgerdb).
  3. Have an idle "initial provide" worker that walks over an on-disk queue of CIDs that have never been provided.
  4. Have an idle "reprovide" worker that walks over all CIDs to be reprovided in the background. We currently reprovide all at once but that's really not the right way to do this. Instead, we should be continuously walking over the set of things we want to provide in the background.

With this solution, I'd expect an interface like:

type Provider interface {
  // no context because this doesn't block.
  Provide(c cid.Cid) error
  Unprovide(c cid.Cid) error
}

When fetching a file, we'd have to somehow tell the provider service to provide blocks according to our strategy. This is where strategies come in. We'll probably want a special DAG walker for each strategy.

Key features:

  1. Providing never blocks a read.
  2. Providing is rate-limited and lazy (it currently clogs the dial queue).
  3. We don't queue anything in memory.
  4. The application (go-ipfs) gets full control.
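
A minimal sketch of a provider with those properties (illustrative only; the on-disk queue is hidden behind a hypothetical Queue interface, and rate limiting is just a ticker):

package provider

import (
    "time"

    cid "github.com/ipfs/go-cid"
)

// Queue is a stand-in for a persistent, on-disk queue of CIDs (e.g. backed
// by leveldb), so nothing has to be held in memory.
type Queue interface {
    Enqueue(cid.Cid) error
    Dequeue() (cid.Cid, bool, error) // ok=false when the queue is empty
}

type Provider struct {
    queue    Queue
    announce func(cid.Cid) error // the actual (re)provide call, e.g. via the DHT
}

// Provide never blocks on the network: it only records the CID in the
// on-disk queue and lets the background worker announce it later.
func (p *Provider) Provide(c cid.Cid) error {
    return p.queue.Enqueue(c)
}

// Run drains the queue at a fixed rate so providing stays lazy and
// rate-limited instead of clogging the dial queue.
func (p *Provider) Run(interval time.Duration) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for range ticker.C {
        c, ok, err := p.queue.Dequeue()
        if err != nil || !ok {
            continue
        }
        _ = p.announce(c) // a real version would log or retry failures
    }
}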

Does this all make sense? I haven't been living in providers land like you have so I may be missing some important context.


I'm wondering, is it worth merging this PR in the meantime?

Given that reading blocks out of the blockstore will force us to send a provider record, we shouldn't. That would kill performance.

@michaelavila
Contributor Author

michaelavila commented Dec 13, 2018

@Stebalien thanks for the response, it makes sense. I have a few clarifying questions:

  1. Explicitly mark blocks to be provided outside of the blockservice/bitswap.

By "Explicitly mark the blocks" do you mean to actually add data to the block that says whether or not it's to be provided? I think that's the case, because we'd need the IPFS nodes receiving blocks from us to know which blocks we are providing and which we aren't, but maybe I'm mistaken. I can also imagine the scenario that we don't actually put data into the blocks, but it leaves open the question of how other nodes determine which blocks to FindProvider on. Maybe we are just deferring that task?

  1. Record this information in a database (really, we may be able to just use the datastore for this as long as we use leveldb or badgerdb).

By "this information" do you mean the CIDs that are being provided by our node? Then this would be the list we'd reprovide from? What happens if the IPFS node does not have the experimental DB features enabled?

  1. Have an idle "initial provide" worker that walks over an on-disk queue of CIDs that have never been provided.

I think this is straightforward. Just set up a worker that works through a queue of CIDs, providing whatever is there. I believe there's an unspecified step above, where we actually put CIDs into the "initial provide queue" if they don't already exist in the database. Is that true?

  1. Have an idle "reprovide" worker that walks over all CIDs to be reprovided in the background. We currently reprovide all at once but that's really not the right way to do this. Instead, we should be continuously walking over the set of things we want to provide in the background.

By "continuously walking over the set of things", do you mean that we are just constantly reproviding? As in, when we get to the end of the list, we go back to the beginning? Or is the reprovide worker being fed by some other process that decides when things should be reprovided? Or do we store some data related to reproviding and then, as we are continually going through the list of CIDs to reprovide, we skip based on some criteria (like some amount of time passing)?

When fetching a file, we'd have to somehow tell the provider service to provide blocks according to our strategy. This is where strategies come in. We'll probably want a special DAG walker for each strategy.

Do different, unrelated IPFS nodes need to provide the same dag of blocks with the same strategy? Or does the node fetching the blocks provide based on the strategy that node is configured with? Like I said above, I assume the purpose of "explicitly marking the blocks" is to indicate to other IPFS nodes which blocks we are providing, but I might be wrong.


I'm closing this PR, but I would like to continue the discussion here.

@ghost ghost removed the status/in-progress In progress label Dec 13, 2018
@michaelavila michaelavila changed the title Move provide from bitswap to ipfs RFC: Move provide from bitswap to ipfs Dec 13, 2018
@eingenito
Contributor

eingenito commented Dec 13, 2018

Do different, unrelated IPFS nodes need to share the same dag of blocks with the same strategy? Or does the node fetching the blocks provide based on the strategy that node is configured with? Like I said above, I assume the purpose of "explicitly marking the blocks" is to indicate to other IPFS nodes which blocks we are providing, but I might be wrong.

It seems like marking blocks as provided is a best-effort hint thing that we'd do to help in finding providers. Different IPFS nodes could choose to be super helpful and (re)provide blocks as indicated (and that makes sense as a default, with maybe limiters to prevent DDOSing of the DHT, like we do now) or they could choose their own strategy. In addition blocks that are common across DAGs might not always deserve to be provided in all DAGs where they exist. I think it'd be neater if we could rely instead on a deterministic way to figure out which blocks to provide/are likely to have been provided. Bonus if it could work in a streaming (or walking) manner.

Either way I think it's probably necessary to have a 'walk up the tree and ask until you find providers' fallback capability when fetching data is stalled on sources.

I wonder about within a node, tracking the providing status of individual blocks or keeping track of only roots (as specified by Provide(c cid.Cid)) and, maybe someday, their associated providing strategies.

Have an idle "initial provide" worker that walks over an on-disk queue of CIDs that have never been provided.

and

Have an idle "reprovide" worker that walks over all CIDs to be reprovided in the background. We currently reprovide all at once but that's really not the right way to do this. Instead, we should be continuously walking over the set of things we want to provide in the background.

...seem really related in behavior. If providing is starting at a root and writing to a queue with a strategy, that could be the same as iterating through all known "provide" roots and running the same logic on reprovide. If at some point we want to offer things like ipfs add --noprovide or ipfs provide <strategy> <ref> then it would be really neat to just update the strategy of the root, and not worry about overlapping trees with different strategies. Different strategies would run on common blocks with different providing outcomes.

Maybe storing a 'provide' flag with each block is an important optimization, I don't know. Even in a global strategy per node scenario those flags would become invalid when a user switched their global strategy in config.

@michaelavila
Contributor Author

Maybe we assume the first solution to finding more providers for an arbitrary block is to go back up through parent CIDs until you find a provider for one of them? Then we don't mark the blocks themselves and we just worry about the number of provide announcements we send out when adding to the repository. Perhaps the task of reducing the number of FindProvider calls can then be worked out later. But this would result in an increase in FindProvider calls that might be prohibitive.

I wonder about within a node, tracking the providing status of individual blocks or keeping track of only roots (as specified by Provide(c cid.Cid)) and, maybe someday, their associated providing strategies.

What if we store just the CIDs that we are providing? That means anywhere from 1 to ALL of the CIDs in your repo would be in the database and therefore reprovided. I think in this situation the strategy would be applied only when adding and then it's not needed again.

Even in a global strategy world those flags would become invalid when a user switched their global strategy in config.

I agree. This is why I'm asking if everyone needs to use the same strategy. If not, then marking the blocks themselves might make less sense.

@eingenito
Contributor

Oh - when I say global I mean within a single IPFS node. I think we have a lot of requirements right now where different nodes will want to use different strategies and I don't think we're going to come up with one strategy that will fit everyone's use cases. I think it's quite possible that different content within the same node would use different strategies - like some things it just makes sense to advertise only the root node. And some things you just don't want to advertise but you want to be able to fetch them locally, etc.

@Stebalien
Member

Responding comment by comment: #5840 (comment)

By "Explicitly mark the blocks" do you mean to actually add data to the block that says whether or not it's to be provided? I think that's the case, because we'd need the IPFS nodes receiving blocks from us to know which blocks we are providing and which we aren't, but maybe I'm mistaken.

No, I mean record the CIDs of blocks we're providing in a database/datastore (outside of the blocks). This is just so that we know which blocks to provide.

For now, I wouldn't transmit this information at all. We can revisit this later and consider sending a bit saying which block we're providing along with the block in bitswap (as we also discussed in London) but we don't have to handle that yet.

but it leaves open the question of how other nodes determine which blocks to FindProvider on. Maybe we are just deferring that task?

I forgot about that issue... For now, I'd defer it. Even if we have to set the default strategy to "provide everything" (for now), simply moving providing out of the hot path is a significant step forwards. Unfortunately, any real solution to this requires a link/graph database so we can walk the tree backwards (as far as I know).

However, we might be able to find a reasonable interim solution. For example, we could stash the full path the user has requested in context. Bitswap could check for this request metadata and use it to make better decisions.
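
As a rough illustration of the interim idea (names made up; this isn't an existing go-ipfs API), the requested path could be carried on the context and read back by lower layers:

package provider

import "context"

// requestedPathKey is a private context key for the path the user asked for.
type requestedPathKey struct{}

// WithRequestedPath records the user-requested path on the context so that
// lower layers (e.g. bitswap) can see what the request was actually for.
func WithRequestedPath(ctx context.Context, path string) context.Context {
    return context.WithValue(ctx, requestedPathKey{}, path)
}

// RequestedPath returns the recorded path, if any.
func RequestedPath(ctx context.Context) (string, bool) {
    p, ok := ctx.Value(requestedPathKey{}).(string)
    return p, ok
}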

Thoughts?

By "this information" do you mean the CIDs that are being provided by our node? Then this would be the list we'd reprovide from? What happens if the IPFS node does not have the experimental DB features enabled?

We should be able to use the datastore. While flatfs won't scale, the default configuration doesn't actually use flatfs for the entire datastore. Instead, it uses flatfs for blocks (everything under the /blocks/ prefix) and leveldb for everything else.

I believe there's an unspecified step above, where we actually put CIDs into the "initial provide queue" in addition to the database, if it doesn't already exist there. Is that true?

Yes.

By "continuously walking over the set of things", do you mean that we are just constantly reproviding? As in, when we get to the end of the list, we go back to the beginning? Or is the reprovide worker being fed by some other process that decides when things should be reprovided? Or do we store some data related to reproviding and then, as we are continually going through the list of CIDs to reprovide, we skip based on some criteria (like some amount of time passing)?

Kind of. My point was that we shouldn't provide en masse every 12 hours (the reprovide interval). Instead, we should try to spread out reproviding over a 12 hour period (such that we send out individual provider records every 12 hours on average). However, that's really just an optimization that we can tackle later.

Do different, unrelated IPFS nodes need to provide the same dag of blocks with the same strategy? Or does the node fetching the blocks provide based on the strategy that node is configured with?

Until we augment bitswap to send this information, each node will have to use its own strategy. Hopefully, we can pick some reasonable defaults such that users don't need to pick their own strategy...

@Stebalien
Member

Stebalien commented Dec 14, 2018

Replying to: #5840 (comment)

It seems like marking blocks as provided is a best-effort hint thing that we'd do to help in finding providers.

(mostly covered in my previous response but I figured I should reiterate this to be explicit)

The issue with this is, as you say:

Either way I think it's probably necessary to have a 'walk up the tree and ask until you find providers' fallback capability when fetching data is stalled on sources.

Unfortunately, doing that will mean storing a link database. We could do this with the datastore but I'm worried that won't be very performant (and will take quite a bit of space).


I wonder about within a node, tracking the providing status of individual blocks or keeping track of only roots (as specified by Provide(c cid.Cid)) and, maybe someday, their associated providing strategies.

So, we could track a root + a provider strategy; however:

  1. Dealing with blocks shared between trees could be tricky.
  2. I'm not sure we really want to optimize that. If we're sending out provider records for a CID, ~16 peers in the DHT will have to store that provider record. If we can't handle remembering each and every block we're announcing to the network, we're announcing too many.

If providing is starting at a root and writing to a queue with a strategy, that could be the same as iterating through all known "provide" roots and running the same logic on reprovide.

To reiterate, my idea was to apply the provider strategy when we add the blocks to the local datastore and then record the outcome. The reprovider would just blindly reprovide all recorded blocks (it wouldn't run any provider strategies).

@Stebalien
Member

Replying to: #5840 (comment)

What if we store just the CIDs that we are providing? That means anywhere from 1 to ALL of the CIDs in your repo would be in the database and therefore reprovided. I think in this situation the strategy would be applied only when adding and then it's not needed again.

^^ this. This is what I was trying to suggest.

(I should have read this comment first and saved a lot of writing...)

@eingenito
Contributor

eingenito commented Dec 14, 2018

@Stebalien and @michaelavila I've been thinking about it a bit and talking with Michael, and I'd like to make a case for storing roots instead of storing a provider flag for every block (or every block that should be provided). I don't think it's a slam dunk, but I think there are some benefits:

  • We store less data - I don't think this one is very important one way or the other. It might not actually be less data when we consider de-duping (mentioned later).
  • Providing and Reproviding become the same operation - When a call to provider.Provide(<CID>) is made we walk the tree based on a default strategy and add blocks that should be provided to a queue. When reproviding we look up roots in a database, and iteratively take the exact same action on each of them (with some complication around priority of providing vs reproviding - meaning we shouldn't starve providing with reproviding).
  • We store a record of the user's intent which offers us some different options when changing existing providing behavior.

The last part is complicated - here's a couple of examples:

  1. Imagine a user wants to change their default providing strategy on an existing node with a lot of content. They update their config to use the 'provide-10-percent-randomly' strategy. If we're storing roots we apply this new default strategy everywhere including reprovides for already added content. If we're storing per-node providing flags existing content continues to be provided as per the strategy in effect when content was added. There's options for the user like deleting and re-adding the content, but it's different.
  2. Imagine an ipfs provider unprovider <cid> command. What should the behavior of unprovide be on overlapping subtrees? Does it mean never provide any nodes under the node with this cid? Or is it more like pin rm. If we store roots you could unprovide one root and via another root some subtree of both could still get provided. And that might be appropriate - the user is saying they don't want to provide the content rooted at cid 'A' but they still want to provide the full content rooted at cid 'B'.
  3. Okay - getting even more speculative, what about an ability to specify provider strategy for a given tree. I apologize for the pathological example. If we store roots with an associated strategy we could easily change strategies for roots, or preserve specified strategies with overlapping subtrees on deletion.
    > ipfs add -q --provide=all -r bigdir/importantdir
    QmYtNZidBu3ASXYJ9N3jR1rvcPjexkQVP1B8uFfWgjK24t
    > ipfs add -q --provide=some -r bigdir
    QmY6yj1GsermExDXoosVE3aSPxdMNYr6aKuw3nA8LoWPRS
    # Intent is pretty clear but what should happen when
    > ipfs pin rm QmYtNZidBu3ASXYJ9N3jR1rvcPjexkQVP1B8uFfWgjK24t
    # with roots bigdir is provided with the 'some' strategy - maybe better?

What about dupes when providing overlapping trees?

This problem is worse when storing only roots - instead of running through all blocks, visiting each one once, we'd be walking DAGs that may well overlap. But what about dupes in general, regardless of whether we're storing roots or provider flags for every block? How do we reliably maintain an interval between providings of a given block given reboots and adds and whatever else can happen, and how can we fairly reprovide blocks? There are options here; many of the comprehensive ones are the same whether we store roots or store flags per block.

If we were to go the route of storing 'last-provided' timestamps for every block then the first and second of my advantages for roots go out the window - but there is still some possible value in storing the users intent, particularly around overlapping content.

What about GC?

Is it about a wash? With flag per block we presumably delete flags as we delete blocks. For roots we could do the same (delete a root when we delete the corresponding block) or maybe clean up at the end, or maybe just read-repair when reproviding runs again?

@eingenito
Contributor

eingenito commented Dec 14, 2018

One thing Michael brought up is that if we store roots then many providing operations become analogous to pinning operations. Like you'd be forbidden from blocking providing of a subtree that wasn't already a root - it would basically be covered by an indirect providing strategy. I guess oddly we could support additive providing directives, but not subtractive.

@Stebalien
Member

Providing and Reproviding become the same operation - When a call to provider.Provide() is made we walk the tree based on a default strategy and add blocks that should be provided to a queue. When reproviding we look up roots in a database, and iteratively take the exact same action on each of them (with some complication around priority of providing vs reproviding - meaning we shouldn't starve providing with reproviding).

The difference between this version and the "mark every block proposal" is that the provider service would no longer be responsible for figuring out which blocks to announce. Instead, the user will have to apply some strategy up-front and explicitly tell this service which blocks to provide. Initial providing and reproviding should use the same logic (provide every marked block).


Regardless of what we do, I believe we'll need to mark every block we're providing, even if we also record the roots everywhere.

As you say, we need some way to avoid providing the same block repeatedly. With the "mark every block" strategy, this is pretty trivial because both leveldb and badgerdb (and any reasonable datastore we might want to use for this) support in-order traversal and seeking. That means we can just walk through our provider records in a loop, recording the index of the last block we provided and the time at which we provided it in case we restart.
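
A sketch of that loop (illustrative; the ordered record store and the announce call are hypothetical stand-ins, and the cursor would be persisted so a restart resumes where we left off):

package provider

import (
    "time"

    cid "github.com/ipfs/go-cid"
)

// Records is a stand-in for the provider records kept in an ordered
// datastore (leveldb/badger support in-order traversal and seeking).
// NextAfter wraps around to the first record after the last one.
type Records interface {
    NextAfter(last cid.Cid) (cid.Cid, bool) // ok=false when there are no records
}

// Cursor is what we'd persist: the last CID provided and when.
type Cursor struct {
    Last     cid.Cid
    LastTime time.Time
}

// reprovideOne advances the cursor by a single record and announces it.
func reprovideOne(recs Records, cur *Cursor, announce func(cid.Cid) error) {
    c, ok := recs.NextAfter(cur.Last)
    if !ok {
        return
    }
    if err := announce(c); err != nil {
        return // leave the cursor in place and retry later
    }
    cur.Last = c
    cur.LastTime = time.Now()
}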

If we need to recompute what we're providing every reprovide cycle, we'd need to somehow record (either in memory or on disk) the last time we provided each block. In-memory is probably a non-starter for large datasets and on-disk is going to be strictly worse than just putting this information on-disk in the first place.


However, I do agree that storing the user's intent could be useful for changing strategies. We could even refcount providers to make changing strategies easy. The tricky part here will be UX.

One thing Michael brought up is that if we store roots then many providing operations become analogous to pinning operations.

This may actually give us a reasonable way to handle that UX issue. I wonder if we can say that we only reprovide pinned content. We (may) send out provider records for unpinned content initially but we don't have to bother recording anything or reproviding in the future.

@eingenito
Contributor

The difference between this version and the "mark every block proposal" is that the provider service would no longer be responsible for figuring out which blocks to announce. Instead, the user will have to apply some strategy up-front and explicitly tell this service which blocks to provide. Initial providing and reproviding should use the same logic (provide every marked block).

Mmm. I don't quite get this. I certainly understand that in the "mark every block" proposal we don't need to recompute which blocks to provide, possibly ever depending on what facilities we offer the user to 'unprovide' or alter providing strategies.

But we need a way to distinguish between an initial provide and subsequent reprovides because we want initial provides to happen asap. So there are different ways we can do this. We'll trigger the initial provide on a call to Provide(cid) - that call (possibly later changed to Provide(cid, strategy)) could 1) walk the DAG and apply a strategy which, upon finding a node to provide, will 2) persist that information, and 3) enqueue the node in the initial provide queue.

Reprovide wouldn't be like super different, but it wouldn't walk DAGs, it would iterate over every 'provide this block' record and enqueue it in the reprovide queue.

Or, if we are recording the last provided time of each block, then the provide/reprovide logic is identical. Upon an explicit call to Provider.Provide(cid) we mark all the blocks rooted at that node using the appropriate (maybe default, maybe specified) strategy as needing providing, but as having never been provided. Then we invoke the thing that iterates over every 'provide this block' record and if a block needs providing but has never been provided we enqueue it with higher priority than the ones that have been provided before.
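
A sketch of that Provide(cid) flow (illustrative; DAG access, persistence, and the queue are all hypothetical callbacks here, and the strategy is just a predicate over visited CIDs):

package provider

import cid "github.com/ipfs/go-cid"

// Strategy decides, for each block visited during the walk, whether it
// should be provided.
type Strategy func(c cid.Cid, depth int) bool

// Provide walks the DAG under root, applies the strategy, persists a
// "provide this block" record for each match, and enqueues it for the
// initial provide.
func Provide(root cid.Cid, links func(cid.Cid) ([]cid.Cid, error), should Strategy,
    persist, enqueue func(cid.Cid) error) error {
    var walk func(c cid.Cid, depth int) error
    walk = func(c cid.Cid, depth int) error {
        if should(c, depth) {
            if err := persist(c); err != nil {
                return err
            }
            if err := enqueue(c); err != nil {
                return err
            }
        }
        children, err := links(c)
        if err != nil {
            return err
        }
        for _, child := range children {
            if err := walk(child, depth+1); err != nil {
                return err
            }
        }
        return nil
    }
    return walk(root, 0)
}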

As you say, we need some way to avoid providing the same block repeatedly. With the "mark every block" strategy, this is pretty trivial because both leveldb and badgerdb (and any reasonable datastore we might want to use for this) support in-order traversal and seeking. That means we can just walk through our provider records in a loop, recording the index of the last block we provided and the time at which we provided it in case we restart.

Yes - in the 'mark every block' scheme we can totally do that. What does the timestamp get us? Is it that, if that time is > the reprovide interval, we know we have to immediately re-reprovide all nodes < the high water mark once we complete the ones > the high water mark?

If we need to recompute what we're providing every reprovide cycle, we'd need to somehow record (either in memory or on disk) the last time we provided each block. In-memory is probably a non-starter for large datasets and on-disk is going to be strictly worse than just putting this information on-disk in the first place.

Agreed - I think this might be a good approach to proper reproviding no matter what.

However, I do agree that storing the user's intent could be useful for changing strategies. We could even refcount providers to make changing strategies easy.

Interesting - I'd like to understand this. Assuming some providing strategies make decisions based on the shape of a DAG, as long as we know the roots we can update strategies by rewalking their subtrees. But I don't know how the refcount helps us.

This may actually give us a reasonable way to handle that UX issue. I wonder if we can say that we only reprovide pinned content. We (may) send out provider records for unpinned content initially but we don't have to bother recording anything or reproviding in the future.

We wondered this too.

I can see a big advantage to the 'mark every block' approach when calculating nodes to provide is expensive. Like with Kuba's scheme where you have to analyze an entire tree to know which nodes to serve.

We (meaning Michael) will move forward with marking blocks and discarding roots and we can bolt on remembering roots and strategies if we need it.

@eingenito
Contributor

I just got the refcount thing. That makes sense.

@michaelavila
Contributor Author

michaelavila commented Dec 17, 2018

TL;DR At some point, as the content of our repo changes out from under us (via things like gc) we have to update the bookkeeping we've done for providing. If we don't, we will never stop announcing that we are providing anything. So Unprovide will be a thing, I think, unless I'm missing something. And if it is, then we need to deal with unproviding subdags that are also provided by 1 or more parents.


I think there's one design decision that's sort of fundamental to the work being done right now and that's at (or near?) the root of what's being discussed: What do we do when unproviding a CID that is a subdag of some other CID that is itself being provided? I think we need to either:

  1. Unprovide it, the user is telling us they don't want to provide a piece of content, assume they really truly don't want to provide it at all. If we do this, then recording roots potentially presents some problems related to representing an arbitrary part of the dag. If unproviding is only really a result of gc, then maybe this reasoning doesn't make sense.

  2. Keep track of enough information for each CID being provided so that we only really stop providing when all parents and the CID itself (assuming it was provided as a root) have been unprovided. There are multiple ways we can solve this, recording roots + strategies being one of them, another being to record each CID along with the root it was provided from. As @eingenito has pointed out, this isn't that different from the roots + strategies approach, but it does have some nice characteristics of both of the solutions being discussed.

Why do we have to figure this out though? It doesn't matter that the user can't themselves unprovide; we are changing the nature of provide. Before, providing/reproviding was based on whether the content was actually in your local repo: initial providing happened as bitswap was made aware of the blocks and reproviding worked by fetching all of the blocks in the repo. Your local repo was maintained with things like gc, so that stuff would get removed from the repo and would eventually no longer be reprovided. Now, providing/reproviding will be based on some bookkeeping we are doing independent of the contents of the repo. So, not only do we have to remove content from the repo, we have to update our bookkeeping so that we stop sending out provide announcements (aka unprovide).

So Unprovide will be a thing, I think, unless I'm missing something. And if it is, then we need to deal with unproviding subdags that are also provided by 1 or more parents.

@eingenito
Contributor

Since right now there's no way to unprovide anything, maybe block deletion can also delete the corresponding per-block provide record. Or we could delete orphaned provide records when we reprovide, if we don't want to slow down GC any.

Wouldn't that basically maintain the current behavior?

I guess one complication is that if we're queuing block ids to disk to provide later we might provide blocks that we no longer have - so we might need to check there?

@michaelavila
Contributor Author

Sure, I concede. GC alone doesn't require us to handle the issue of unproviding subdags that are also provided by 1 or more parents. I do think it's a question we should have answered if considering any nontrivial designs (anything more than simply keeping a list of every block to reprovide). Thoughts?

@eingenito
Contributor

eingenito commented Dec 17, 2018

Does this make sense?

  • Start with an implementation that doesn't track roots or strategies. When data is added we run the configured default strategy against it and write out provide records in the database which we then use for providing and reproviding (obv.). This is a behavior change from the current implementation because right now you can change your reprovide strategy for already added files with a configuration change. Through some mechanism we won't advertise blocks we no longer have.

    Overlapping files don't really matter - if a block is ever marked as needing providing it'll be advertised to the DHT until it is deleted. If you re-add a block, its provide status will be recomputed by the active default strategy.

  • If we choose to we can offer the user an 'unprovide' command that simply unprovides every block under a given node at the time it is run; regardless of overlapping trees. If a subtree of the unprovided tree is added again later it gets provided according to the default strategy. 'unprovide' is like a point in time masking of a subtree wrt providing.

  • Later/Or, we can start tracking roots, and treat them like pins; with the same direct, recursive, or indirect behavior, and you can only modify direct and recursive pins. Then 'unprovide' can get fancier and still provide overlapping subtrees of other provided roots (if we think that's appropriate). And/Or we can think about building features like ipfs provide --strategy=sparse <cid>.

I think we can worry about the details of tracking roots, like @Stebalien's reference counting, later. It's going to get a little complicated, but I think it's doable, and it's a reasonable evolution of the initial behavior.

@Stebalien
Member

@eingenito

Does this make sense?

Yes. That sounds like a good approach.


Re: #5840 (comment)

The difference between this version and the "mark every block proposal" is that the provider service would no longer be responsible for figuring out which blocks to announce. Instead, the user will have to apply some strategy up-front and explicitly tell this service which blocks to provide. Initial providing and reproviding should use the same logic (provide every marked block).

But we need a way to distinguish between an initial provide and subsequent reprovides because we want initial provides to happen asap. So there are different ways we can do this. We'll trigger the initial provide on a call to Provide(cid) - that call (possibly later changed to Provide(cid, strategy)) could 1) walk the DAG and apply a strategy which, upon finding a node to provide, will 2) persist that information, and 3) enqueue the node in the initial provide queue.

  1. You're right. The initial provide will have to iterate over a queue while subsequent reprovides will just walk over every marked block.
  2. My thinking was to walk the DAG outside of the call to Provide(cid). The provider system wouldn't apply provider strategies. Instead, the subsystem adding the data would tell the provider system which blocks to provide (explicitly).

As you say, we need some way to avoid providing the same block repeatedly. With the "mark every block" strategy, this is pretty trivial because both leveldb and badgerdb (and any reasonable datastore we might want to use for this) support in-order traversal and seeking. That means we can just walk through our provider records in a loop, recording the index of the last block we provided and the time at which we provided it in case we restart.

Yes - in the 'mark every block' scheme we can totally do that. What does the timestamp get us? Is it that, if that time is > the reprovide interval, we know we have to immediately re-reprovide all nodes < the high water mark once we complete the ones > the high water mark?

Mostly, it allows us to not reprovide everything if we know we haven't been offline for very long. We shouldn't even need any additional logic. We could just provide (NOW-LAST_PROVIDE)/12h * NUM_BLOCKS every 5 minutes. If we go offline for a bit, we'd just provide more blocks when we next come back online.
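
In code, that batch-size calculation might look roughly like this (reprovideInterval being the 12h period; purely illustrative):

package provider

import "time"

// blocksToProvide returns how many provider records to send on this wakeup
// so that, on average, every block gets reprovided once per reprovideInterval.
// If we were offline for a while, lastRun is older and the batch grows.
func blocksToProvide(lastRun time.Time, reprovideInterval time.Duration, numBlocks int) int {
    elapsed := time.Since(lastRun)
    n := int(float64(numBlocks) * (float64(elapsed) / float64(reprovideInterval)))
    if n > numBlocks {
        n = numBlocks
    }
    return n
}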


Re: #5840 (comment)

If we don't, we will never stop announcing that we are providing anything. So Unprovide will be a thing, I think, unless I'm missing something. And if it is, then we need to deal with unproviding subdags that are also provided by 1 or more parents.

We can handle this by calling Has on the blockstore for every block before providing it (removing the provider record if we no longer have it). We can also use some kind of event system but that may be overkill; Has is pretty fast.
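
A minimal sketch of that check (the blockstore lookup, announce, and record deletion are passed in as plain functions here rather than tied to the real go-ipfs interfaces):

package provider

import cid "github.com/ipfs/go-cid"

// provideIfPresent consults the blockstore before announcing: if the block
// was garbage collected since we recorded it, drop the provider record
// instead of reproviding something we no longer have.
func provideIfPresent(c cid.Cid,
    has func(cid.Cid) (bool, error),
    announce, deleteRecord func(cid.Cid) error) error {
    ok, err := has(c)
    if err != nil {
        return err
    }
    if !ok {
        return deleteRecord(c)
    }
    return announce(c)
}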

What do we do when unproviding a CID that is a subdag of some other CID that is itself being provided?

This is what I was trying to get at with the refcounting (effectively your option 2). Basically:

  1. Every pin would have an associated provider strategy.
  2. When creating the pin we'd apply the provider strategy and increment the "provider" refcount on the block.
  3. When removing a pin, we'd apply the provider strategy in reverse and decrement the provider refcount.

This assumes that we're not explicitly providing unpinned files/blocks (e.g., cached data fetched through bitswap). IMO, that's totally reasonable and is what most users will expect.
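
A rough sketch of the refcounting idea (illustrative only; an in-memory map stands in for the datastore that would persist the counts):

package provider

import (
    "sync"

    cid "github.com/ipfs/go-cid"
)

// refcounts tracks, per CID, how many pins currently want it provided.
type refcounts struct {
    mu     sync.Mutex
    counts map[cid.Cid]int
}

func newRefcounts() *refcounts {
    return &refcounts{counts: make(map[cid.Cid]int)}
}

// pinProvided is called for every block selected by a pin's provider
// strategy when the pin is created.
func (r *refcounts) pinProvided(c cid.Cid) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.counts[c]++
}

// unpinProvided is called when the pin is removed and its strategy is
// applied in reverse; once the count hits zero we stop providing c.
func (r *refcounts) unpinProvided(c cid.Cid) (stillProvided bool) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.counts[c]--
    if r.counts[c] <= 0 {
        delete(r.counts, c)
        return false
    }
    return true
}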

@michaelavila
Contributor Author

@Stebalien re:

If we don't, we will never stop announcing that we are providing anything. So Unprovide will be a thing, I think, unless I'm missing something. And if it is, then we need to deal with unproviding subdags that are also provided by 1 or more parents.

We can handle this by calling Has on the blockstore for every block before providing it (removing the provider record if we no longer have it). We can also use some kind of event system but that may be overkill; Has is pretty fast.

There is an issue where a block can be GC'd, then re-added before a reprovide. In that case, we would not announce immediately upon re-adding, but would continue to reprovide. Do you see that being an issue?

@Stebalien
Member

That should be fine; really, I think that's what we want. In this case, we've already sent out a provider record for that block within the last reprovide period so there's no use sending out a new one.

@michaelavila
Contributor Author

Ok, thanks.
