overreading #5
I would expect that it would give me the remainder and then drain/abort.
Current behavior:

```js
var pull = require('pull-stream')
var Reader = require('pull-reader')
var reader

pull(
  pull.values([new Buffer('hi')]),
  reader = Reader()
)

reader.read(4, function (err, result) {
  console.log(err, result) // true, undefined
})
```
Looking at the code, I think it will just fall through to ending.
Yes, the main case was that I wanted to see that there was not as much data as expected, and report an error upstream with a message representing that. What currently happens is that the stream terminates without any message. So I think there are two options: either error out when the requested length cannot be satisfied, or pass the remainder through before ending.
I think the latter is more generally useful, and can allow the consumer to check whether the remaining value was incomplete. This allows someone to use …
The current behaviour seems about right. I tried it by adding another read call:
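A sketch of what such a second read might look like. This assumes that calling `reader.read` with a `null` length returns whatever is currently buffered; that behaviour is an assumption here, not something shown in the thread:

```js
// sketch only -- assumes a null length means "give me whatever is buffered"
reader.read(4, function (err, result) {
  console.log(err, result) // true, undefined -- not enough data for 4 bytes

  reader.read(null, function (err, remainder) {
    console.log(err, remainder) // the buffered bytes, if any remain
  })
})
```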
It is very similar to standard Node.js streams, where the design is: if you want to read X bytes, the stream will not return any bytes until it has X bytes buffered. If it returned the bytes it had (<X), you would have to build a layer of caching yourself, because the stream can't emit the same data twice, and doing that manually would be cumbersome for wire protocols. Would knowing the buffer size help solve your case?
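For illustration, a minimal sketch of those Node.js semantics (not from the original thread): `read(n)` returns `null` until `n` bytes are buffered.

```js
var Readable = require('stream').Readable

var stream = new Readable({ read: function () {} })
stream.push(Buffer.from('hi')) // only 2 bytes buffered

console.log(stream.read(4)) // null -- fewer than 4 bytes available
console.log(stream.read(2)) // <Buffer 68 69>
```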
@diasdavid
My assumption is that …

I'm not sure how this is relevant? My main goal here is better handling of edge cases where things did not behave as expected.
That is correct. That is also why you can't get a 'remainder' from a read operation, as it is preferable that data only flows when the next action in the pipeline can do something with it.
If the …
We might have a similar need. Handling 'canceled operations' in JS is really hard because of the lack of context (like we have in Go). We've opened https://github.com/ipfs/interface-ipfs-core/issues/58 and are currently exploring the best route.
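To illustrate the demand-driven flow described above, a minimal sketch (not from the original thread): in pull-streams the sink requests each value, so the source never emits data nobody has asked for.

```js
var pull = require('pull-stream')

pull(
  pull.values([1, 2, 3]),
  // a hand-rolled sink: data flows only when read() is called
  function sink (read) {
    read(null, function next (end, data) {
      if (end) return
      console.log('got', data)
      read(null, next) // request the next value
    })
  }
)
```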
I see what you are saying about not being able to push data twice: it can't return the remainder and return an error (until the next read is requested). But imagine the case where there is no error, just the flow ends:

```js
var pull = require('pull-stream')
var pullFile = require('pull-file')
var Reader = require('pull-reader')

var reader = Reader()

pull(
  pullFile('./data'),
  reader
)

function readNextBytes (cb) {
  reader.read(1024, cb)
}
```

Here we attempt to read chunks of 1 KB of data at a time, but we lose the remainder when the file ends. Here are some ideas:
```js
// Idea 1: when `end` is `true`, any remaining data is passed on in `data`.
// If reading from a 1025 byte file, reader would return:
//   (null, 1024 bytes)
//   (true, 1 byte)
reader.read(1024, function (end, data) {
  if (end instanceof Error) return handleError(end)
  handleData(data)
  if (end) allDone()
})
```
```js
// Idea 2: when an abort signal is received, the next read returns the
// remaining data, and the read after that returns the abort.
// An error still takes priority over the remainder.
// If reading from a 1025 byte file, reader would return:
//   (null, 1024 bytes)
//   (null, 1 byte)
//   (true, null)
reader.read(1024, function (end, data) {
  if (end instanceof Error) return handleError(end)
  if (end) return allDone()
  if (data.length < 1024) return handleError(new Error('App Specific Error - Corrupted save file'))
  handleData(data)
})
```
@kumavis I think you are correct.
What is the expected behavior when `reader.read(size, cb)` is called and `size` is greater than the remaining readable data?