
Problem with task.write() when not enough buffer free #154

Closed
gregor-anich-uibk opened this issue Feb 15, 2022 · 7 comments · Fixed by #156

Comments

@gregor-anich-uibk

gregor-anich-uibk commented Feb 15, 2022

Hello,

I believe I found a problem when writing data to a task. As long as there is enough free buffer space to take all the data passed to write(), everything is fine. But once there is not enough free buffer left, nidaqmx calls (for example) DAQmxWriteDigitalU32, which takes as much data as possible and reports how much was taken in samps_per_chan_written, but it also returns the error code DAQmxErrors.SAMPLES_CAN_NOT_YET_BE_WRITTEN. That error causes an exception to be raised, so the user has no chance to retrieve samps_per_chan_written and it's game over.

I am not sure what would be a good way to fix this. One fairly easy, backwards-compatible fix would be to add the samps_per_chan_written information to the exception. Alternatively, this specific error could simply not raise an exception: the user expects the number of samples written to be returned, and if that number is less than the amount of data passed to the function, it is already clear that not all samples could be written.

Edit: I think DAQmxErrors.NO_MORE_SPACE can sometimes be returned as well; I am not sure whether there are more error codes that would need the same treatment.
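To make the failure mode concrete, here is a library-free sketch of it. RingBuffer and SamplesCanNotYetBeWritten are illustrative stand-ins for the DAQmx output buffer and error, not the real nidaqmx API:

```python
# Hypothetical model of the reported problem: the write accepts a partial
# amount of data, but the raised exception hides the partial count.

class SamplesCanNotYetBeWritten(Exception):
    """Stand-in for DAQmxErrors.SAMPLES_CAN_NOT_YET_BE_WRITTEN."""

class RingBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def write(self, samples):
        """Accept as many samples as fit; raise if any were rejected."""
        free = self.capacity - self.used
        written = min(len(samples), free)
        self.used += written
        if written < len(samples):
            # The partial count is known here, but the exception hides it
            # from the caller -- exactly the problem described above.
            raise SamplesCanNotYetBeWritten()
        return written

buf = RingBuffer(capacity=10)
buf.write(list(range(8)))      # fits entirely: returns 8
try:
    buf.write(list(range(5)))  # only 2 of 5 fit; the count of 2 is lost
except SamplesCanNotYetBeWritten:
    pass
print(buf.used)  # 10 -- samples were taken, but the caller can't tell how many
```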

Greetings,
Gregor

@zhindes
Collaborator

zhindes commented Feb 15, 2022

Wow, very nice observation. I like your idea of having that information in the exception. We could make a subclass of DaqError for this specific case that also includes the samps_per_chan_written data. That way in the general case, people will catch the DaqError and abort normally. For advanced applications that can handle that error more gracefully (which is rare, in my opinion), they could catch the more specific error and do something reasonable.
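A minimal sketch of that idea, assuming nothing about the eventual implementation (DaqWriteError and flaky_write are hypothetical names; the real class and signature are whatever the fix settles on):

```python
# Sketch: a DaqError subclass that carries the partial-write count, so
# general code can catch DaqError while advanced code catches the subclass.

class DaqError(Exception):
    """Stand-in for nidaqmx's DaqError base class."""

class DaqWriteError(DaqError):
    """Hypothetical subclass carrying samps_per_chan_written."""
    def __init__(self, message, samps_per_chan_written):
        super().__init__(message)
        self.samps_per_chan_written = samps_per_chan_written

def flaky_write(requested, free):
    """Toy write: accepts what fits, raises with the partial count."""
    written = min(requested, free)
    if written < requested:
        raise DaqWriteError("buffer full", samps_per_chan_written=written)
    return written

# General applications catch DaqError and abort; advanced ones catch
# DaqWriteError and resume from the partial count.
try:
    flaky_write(requested=100, free=60)
except DaqWriteError as e:
    print(e.samps_per_chan_written)  # 60
```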

What do you think @maxxboehme ?

@zhindes
Collaborator

zhindes commented Feb 16, 2022

It turns out the DAQmx C API doesn't correctly set samps_per_chan_written in error conditions. I'll fix that, but it won't be resolved until the next DAQmx release :(

@gregor-anich-uibk
Author

gregor-anich-uibk commented Feb 16, 2022

Wow, man, thanks, finally! I have suspected this for a long time and was just about to write an example in C to demonstrate it to NI. I needed all kinds of strange strategies, querying space_avail etc., to work around this problem. After my program ran, the sum of the return values from all write() calls often did not match total_samps_per_chan_generated, but a bug in the C API explains everything.

Can you estimate how long it will take until the changes made it into a new release of the DAQmx package?

Edit: removed rant (sorry)

@zhindes
Collaborator

zhindes commented Feb 16, 2022

@gregor-anich-uibk - check out #156.

Can you estimate how long it will take until the changes made it into a new release of the DAQmx package?

It should get released in the next major version of DAQmx. That should be 21.8 in mid-late April.

@gregor-anich-uibk
Author

gregor-anich-uibk commented Feb 16, 2022

Ok, that's not too bad. Can you tell me whether there is a reliable way to use the API to work around this bug in the meantime? And thanks a lot for fixing the other problems so quickly!

@zhindes
Collaborator

zhindes commented Feb 16, 2022

You could query task.out_stream.curr_write_pos before and after the write; it should be incremented by the number of samples actually written. I just validated that in my test.
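A hardware-free sketch of this workaround: FakeOutStream stands in for a real nidaqmx task (which needs the DAQmx driver and hardware); only the curr_write_pos bookkeeping mirrors the suggested strategy of diffing the write position around the write() call.

```python
# Workaround sketch: recover the number of samples actually accepted
# by comparing curr_write_pos before and after a failing write.

class FakeOutStream:
    """Toy stand-in for task.out_stream with a limited free buffer."""
    def __init__(self, free):
        self.curr_write_pos = 0
        self._free = free

    def write(self, n):
        taken = min(n, self._free)
        self._free -= taken
        self.curr_write_pos += taken  # advances even on a partial write
        if taken < n:
            raise RuntimeError("samples cannot yet be written")
        return taken

stream = FakeOutStream(free=40)
before = stream.curr_write_pos
try:
    stream.write(100)            # only 40 of 100 samples fit
except RuntimeError:
    pass
written = stream.curr_write_pos - before
print(written)  # 40 -- recovered despite the exception
```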

@gregor-anich-uibk
Author

Ok, thanks once more. That's what I was going to try next, and it's nice to have confirmation that it should work.
