Identify the "big bugs" and "big optimizations" relevant for data.gov #105
Comments
Storage is one that everyone notices, but in my opinion it isn't the limiting factor in this case. We should instead focus on getting the performance for those big datasets into a range where it is possible to use them at all. This includes adding, fetching, and rechecking. Also, I have no idea if our GC will run at all with such a number of keys (it might run out of memory). The DHT is another major problem that might make go-ipfs choke even if you've got the disk space. Filestore would be nice space-wise, but it could probably be a sprint on its own (or not, depending on how cleanly we want to do it), and even if we got it there might be other barriers to deploying this data into IPFS.
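To see why GC memory scales with the number of keys: a mark-and-sweep collector has to hold the set of reachable block CIDs in memory while it walks the pin roots, so with hundreds of millions of blocks that set alone can exhaust RAM before the sweep even starts. Below is a minimal, illustrative Go sketch of the pattern — not go-ipfs's actual GC code, and all the types are stand-ins:

```go
package main

import "fmt"

// CID is a hypothetical stand-in for a content identifier.
type CID string

// markReachable walks the DAG from the pin roots and records every CID it
// sees. The `reachable` map is the memory problem: it grows with the total
// number of live blocks, not with the size of any one dataset operation.
func markReachable(roots []CID, links func(CID) []CID) map[CID]struct{} {
	reachable := make(map[CID]struct{}) // one entry per reachable block
	stack := append([]CID(nil), roots...)
	for len(stack) > 0 {
		c := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if _, seen := reachable[c]; seen {
			continue
		}
		reachable[c] = struct{}{}
		stack = append(stack, links(c)...)
	}
	return reachable
}

// sweep deletes every stored block that was not marked.
func sweep(stored []CID, reachable map[CID]struct{}, del func(CID)) {
	for _, c := range stored {
		if _, ok := reachable[c]; !ok {
			del(c)
		}
	}
}

func main() {
	// Toy DAG: root -> a, b; a -> c. Block d is unreferenced garbage.
	dag := map[CID][]CID{"root": {"a", "b"}, "a": {"c"}}
	links := func(c CID) []CID { return dag[c] }
	reachable := markReachable([]CID{"root"}, links)
	sweep([]CID{"root", "a", "b", "c", "d"}, reachable,
		func(c CID) { fmt.Println("deleting", c) })
}
```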
Some notes I wrote down the other day:
@whyrusleeping and I added some notes here: ipfs/notes#216 -- copied here: data.gov
Relevant notes from the sprint planning call: Big Bugs & Optimizations. TODO: dig up the diagram @whyrusleeping created.
Marking the issue "Done" because we've identified the list, but we will still be using it as a reference.
Make a short list of the "big bugs" and "big optimizations" relevant for this sprint. We'll want a good list to have in mind,
e.g. file attrs, and bitswap supporting paths (kills so many RTTs).
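On why path support in bitswap would kill so many RTTs: resolving a path like /ipfs/<root>/a/b/c block-by-block is inherently sequential — you only learn the next CID after the parent block arrives, so each path segment costs a full round trip. A path-aware request could let the remote peer resolve the whole path and return every intermediate block in one round trip. A small illustrative Go sketch, with hypothetical stand-in types (this is not the bitswap API):

```go
package main

import "fmt"

// CID and Block are hypothetical stand-ins for illustration only.
type CID string

type Block struct {
	Links map[string]CID // named child links, as in a unixfs directory
}

// resolveBlockwise mimics today's behavior: each segment needs the parent
// block first, so segments resolve sequentially, one round trip each.
func resolveBlockwise(root CID, segments []string, fetch func(CID) Block) (CID, int) {
	rtts := 0
	cur := root
	for _, seg := range segments {
		blk := fetch(cur) // one network round trip per segment
		rtts++
		cur = blk.Links[seg]
	}
	return cur, rtts
}

func main() {
	// Toy block store standing in for remote peers.
	store := map[CID]Block{
		"root": {Links: map[string]CID{"a": "qa"}},
		"qa":   {Links: map[string]CID{"b": "qb"}},
		"qb":   {Links: map[string]CID{"c": "qc"}},
	}
	fetch := func(c CID) Block { return store[c] }

	target, rtts := resolveBlockwise("root", []string{"a", "b", "c"}, fetch)
	fmt.Printf("resolved %s in %d round trips\n", target, rtts)
	// A path-aware bitswap could send "root/a/b/c" as a single request and
	// receive all the intermediate blocks back in one round trip.
}
```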