Upgrade the files branch (#323) to work with pull-streams #469
Conversation
Quick update: I managed to find the issue. It's a double read of the first chunk of the file, caused by a race condition inside pull-file.
  )
})

function handleError (err) {
It might not be a great idea to create this function every time a request happens. In general, handleError should probably live somewhere global in the lib instead of being specific to get.
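A minimal sketch of that suggestion, hoisting the handler to module scope (the `reply` shape, the `{ Message, Code }` error format, and the `ipfs.files.cat` call are assumptions based on the surrounding Hapi-style handlers, not the actual code):

```js
// Hypothetical sketch: one shared error handler for the whole
// resource module instead of a closure created per request.
function handleError (reply, err) {
  reply({
    Message: err.toString(),
    Code: 0
  }).code(500)
}

exports.get = (request, reply) => {
  const ipfs = request.server.app.ipfs
  ipfs.files.cat(request.query.arg, (err, stream) => {
    if (err) {
      return handleError(reply, err) // shared, not per-request
    }
    reply(stream)
  })
}
```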
Fix for pull-file is up: pull-stream/pull-file#4
@dignifiedquire awesome! Shall we use the fork in the meantime?
@diasdavid still working out some issues :/
Force-pushed from aca631a to 72d35c0.
@jbenet fixed all the things :D (except pull-file, which still requires the PR to be merged)
Whoo, a wild @jbenet appears and fixes my code 😹 The other issue I fixed in ipfs/js-ipfs-repo#85.
Force-pushed from 72d35c0 to 06ee44f.
@dignifiedquire could you describe how we went from "pull-file has a race condition that emits the same chunk twice" to "putting a lock that buffers the whole block is a fix"?
@@ -53,6 +53,7 @@ module.exports = function files (self) {
     pull(
       pull.values([hash]),
       pull.asyncMap(self._dagS.get.bind(self._dagS)),
+      pull.take(1),
Can't these be replaced with just one call to dagService to get the DAGNode?
it could, but I like it this way :P
Why would we have something in 3 lines when we can just use one? It adds unnecessary complexity, memory overhead and superfluous ops.
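For reference, the one-call version being argued for might look like this (a sketch; it assumes `_dagS.get(hash, callback)` is node-style callback-based, which the `pull.asyncMap` usage above implies, and `callback` stands in for whatever continuation the pipeline fed):

```js
// Sketch: fetch the DAGNode directly instead of running a
// one-value pull-stream pipeline through asyncMap + take(1).
self._dagS.get(hash, (err, dagNode) => {
  if (err) {
    return callback(err)
  }
  // ...continue with dagNode, as the pipeline's sink did...
  callback(null, dagNode)
})
```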
no it doesn't add memory overhead, because the implementation of .get
in the dagService is literally the same thing
sorry, nvm, what I'm doing here is stupid -.-
This is absurd. It's serious fancy abstraction creep. Careful.
I filed a PR on pull-file as mentioned above, where @dominictarr was kind enough to point out that calling the file stream twice without waiting for the previous callback to finish was a bug in the sink stream, not in pull-file: pull-streams should always wait for the previous read to finish. So I started digging and found that the source stream was being called twice because we were not properly locking gets to the same keys in the blockstore (we did this before we transitioned to pull-streams; I simply forgot to put it back during the refactor).
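A minimal sketch of the kind of per-key locking described above (the names `BlockStore` and `_fetchFromDisk` and the callback shapes are illustrative, not the actual js-ipfs-repo API):

```js
// Sketch: serialize concurrent gets for the same key so a second
// read cannot start before the first one's callback has fired.
class BlockStore {
  constructor () {
    this._pending = new Map() // key -> [callback, ...]
  }

  get (key, callback) {
    const waiting = this._pending.get(key)
    if (waiting) {
      // A get for this key is already in flight; queue up.
      waiting.push(callback)
      return
    }
    this._pending.set(key, [callback])

    this._fetchFromDisk(key, (err, block) => {
      const callbacks = this._pending.get(key)
      this._pending.delete(key)
      // Hand the single result to every queued caller.
      for (const cb of callbacks) {
        cb(err, block)
      }
    })
  }

  _fetchFromDisk (key, callback) {
    // Placeholder for the real on-disk read.
    process.nextTick(() => callback(null, Buffer.from('block for ' + key)))
  }
}
```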
}

const ipfs = request.server.app.ipfs
// TODO: make pull-multipart
Track this in an issue. @xicombd you would probably like to get that working, since you did the first multipart parser :)
@dignifiedquire interesting, so reading the same file twice causes an issue? That sounds weird. Also, CR'ed this PR.
No, reusing the same stream to read the file can cause issues.
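A toy illustration of that point (not code from this PR): a pull-stream source is stateful, so once it has been drained, reading it again just returns an immediate end.

```js
const pull = require('pull-stream')

const source = pull.values([1, 2, 3])

// First drain consumes the source completely.
pull(source, pull.collect((err, first) => {
  console.log(first) // [1, 2, 3] – the source is now exhausted

  // Reusing the same source: it signals end straight away.
  pull(source, pull.collect((err2, second) => {
    console.log(second) // []
  }))
}))
```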
Add tests making sure .cat shows the right output, and fix that test by using the right argument from the cli. Ref: issue #476
@diasdavid let's squash this into a single commit when merging, please.
@dignifiedquire I've squashed and reworded, but not all into a single commit, because that loses the contributions from the different participants. That being said, thank you everyone @nginnever @noffle @victorbjelkholm @dignifiedquire @jbenet that took part in this PR, js-ipfs is at least 40% more awesome and robust just for having all of these tests passing :D thank you! 👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏👏🎉👏 |
History is important too. Squash responsibly.
Some tests are still breaking, need to figure out why.

Needs:

TODO:
- `add` onto `master`
- `files get`: actually stream the file content, using streams + size args
- http: `files .cat` streams a large file
- http: `files .get` a large file
- http: `files .get` directory