fix: lock source to ensure no double reads happen #4
Conversation
When the source is read very quickly while the file descriptor is still being opened, chunks of the file can end up being read twice. This introduces a lock to avoid the issue.
Currently the tests for termination are failing, because the locking causes the abort to be propagated only after the previous read is done. I would appreciate some input on how best to solve this.
Not 100% sure this is the best solution for the issues I'm seeing. Any feedback and ideas are very welcome 💭
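For illustration, a hedged sketch (not from the PR) of the calling pattern that triggers the race: a second read issued before the first one has called back, while the file descriptor is still being opened. The module name in the require is an assumption. As the reply below notes, this pattern is itself an invalid use of a pull-stream source.

var file = require('pull-file') // assumed: this module's exported source factory

var source = file('example.txt')

// two reads issued back to back, before the fd has finished opening
source(null, function (err, chunk) { console.log('first', err, chunk) })
source(null, function (err, chunk) { console.log('second', err, chunk) }) // parallel read: invalid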
Calling read in parallel is not a valid pull-stream; this is a bug in the sink, not in this module. Maybe what would help is a spec-stream you could drop in to debug the sink?
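A hedged sketch of such a spec-checking wrapper (hypothetical, not an existing module or this module's API): it passes reads through to the real source, but throws as soon as the sink issues a second read before the previous callback has fired. Aborts (a truthy end) are let through, since aborting out of turn is valid (see the clarification further down).

function specSource (source) {
  var reading = false
  return function (end, cb) {
    // an abort may arrive out of turn; only a plain read must wait its turn
    if (reading && end == null) {
      throw new Error('pull-stream contract violated: read called in parallel')
    }
    reading = true
    source(end, function (err, data) {
      reading = false
      cb(err, data)
    })
  }
}

Dropped between the file source and the suspect sink, e.g. pull(specSource(file('data.txt')), sink), it turns a silent double read into a loud stack trace pointing at the offending sink.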
@@ -161,11 +165,16 @@ module.exports = function(filename, opts) {
    readNext(cb);
  };

  var lockedSource = function (end, cb) {
> the locking causes the abort to be propagated only after the previous read is done. I would appreciate some input on how best to solve this.
What happens if a termination signal skips the lock?
Something like:
var lockedSource = function (end, cb) {
  if (end) return source(end, cb)
  lock('source', function (release) {
    source(null, release(cb))
  })
}
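For context, a minimal sketch of the lock(name, fn) helper this snippet assumes (the PR's actual lock implementation isn't shown in this hunk, so the details here are an assumption): a per-name mutex that queues waiters and hands each one a release function, where release(cb) wraps a callback so the lock is freed as soon as that callback fires.

var queues = {}

function lock (name, fn) {
  var queue = queues[name] = queues[name] || []
  queue.push(fn)
  if (queue.length === 1) next(name) // lock is free, run immediately
}

function next (name) {
  var queue = queues[name]
  if (!queue.length) return
  // hand the current holder a release() that wraps its callback
  queue[0](function release (cb) {
    return function () {
      queue.shift() // free the lock
      next(name)    // wake the next waiter, if any
      if (cb) cb.apply(null, arguments)
    }
  })
}

With a helper of this shape, the suggested lockedSource serializes plain reads while still letting an abort (end) bypass the queue entirely.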
To clarify: you can abort out of turn, but you cannot do a valid read out of turn.
@dominictarr thanks for pointing my nose in the right direction; it was a bug in my code. Closing this PR.
No problem!