Describe the bug
Users tend to use MAVLink over all sorts of links with all sorts of implementations. While integrating a certain method of forwarding messages I realized that the _mav_finalize_message_chan_send function, which gets called automatically when sending any telemetry or acks, writes the message to the UART device in three write operations to save RAM by not copying the <256 byte payload into a buffer. So it sends
header
payload
checksum
(signature)
with independent writes, and @dagar pointed out in the dev call that it actually checks whether the outgoing buffer has enough space for each write independently, which means it can easily happen that only the header gets sent and the rest is skipped when the link is congested.
When trying to forward the outgoing traffic to something other than the UART, interleaving it with a different protocol's packets, or wrapping it in a custom layer, it is strictly not possible to grab the per-message output of the PX4 MAVLink stream without clobbering the autogenerated MAVLink c_library or the sending routine of every single message. I'm also assuming that, depending on the OS/RTOS, sending/receiving UART bursts of fragmented message parts is not the most efficient way to handle DMA and interrupts.
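To make the failure mode concrete, here is a minimal self-contained sketch (not the actual library code; uart_tx_has_space, uart_write, send_chunk and the simulated buffer size are made up for illustration) of what per-chunk buffer checks mean on a congested link:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simulated TX buffer with very little free space, to model a congested link. */
static size_t tx_free_bytes = 11;

static bool uart_tx_has_space(size_t len) { return len <= tx_free_bytes; }

static void uart_write(const uint8_t *data, size_t len)
{
    (void)data;
    tx_free_bytes -= len;
    printf("wrote %zu bytes (%zu left in TX buffer)\n", len, tx_free_bytes);
}

/* Each chunk checks the buffer on its own; if it doesn't fit it is simply skipped. */
static void send_chunk(const uint8_t *data, size_t len)
{
    if (uart_tx_has_space(len)) {
        uart_write(data, len);
    } else {
        printf("dropped %zu bytes: no room\n", len);
    }
}

/* Simplified picture of the current behaviour: header, payload, checksum and
 * optional signature go out as independent writes, so a congested link can
 * emit a bare header and lose the rest of the frame. */
static void send_message_fragmented(const uint8_t *header, size_t header_len,
                                    const uint8_t *payload, size_t payload_len,
                                    const uint8_t checksum[2],
                                    const uint8_t *signature, size_t sig_len)
{
    send_chunk(header, header_len);
    send_chunk(payload, payload_len);
    send_chunk(checksum, 2);
    if (sig_len > 0) {
        send_chunk(signature, sig_len);
    }
}

int main(void)
{
    uint8_t header[10]  = {0xFD};   /* MAVLink v2 frames start with 0xFD */
    uint8_t payload[9]  = {0};      /* e.g. a HEARTBEAT-sized payload    */
    uint8_t ck[2]       = {0};
    send_message_fragmented(header, sizeof header, payload, sizeof payload, ck, NULL, 0);
    return 0;
}
```

With only 11 bytes free, the header fits but the payload and checksum are dropped, leaving a truncated frame in the stream, which is exactly the situation described above.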
Code references
Example of how sending a message is typically triggered in PX4: https://github.com/PX4/Firmware/blob/d4f984fea5079b4a2a7ec086df8cdb8c47583027/src/modules/mavlink/mavlink_messages.cpp#L370-L371
The helper this "convenience function" calls:
https://github.com/mavlink/c_library_v2/blob/ac40c0329e88b70ae5db4c1467ed5853d305af54/common/mavlink_msg_heartbeat.h#L198
The 3 or more write calls the helper function initiates:
https://github.com/mavlink/c_library_v2/blob/ac40c0329e88b70ae5db4c1467ed5853d305af54/mavlink_helpers.h#L353-L355
Expected behavior
IMHO a MAVLink message should be sent by first checking whether the link buffer still has the capacity, if so filling a send buffer with the complete message, and then issuing one write with that send buffer.
Something like how MAVSDK does it, but optimized to work with fewer resources: https://github.com/mavlink/MAVSDK/blob/14056858a03432d9177f7039581e1fc7faf5e21c/src/core/udp_connection.cpp#L155-L160
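A minimal sketch of that flow (mavlink_msg_to_send_buffer and MAVLINK_MAX_PACKET_LEN come from the generated c_library; uart_tx_free_bytes and uart_write are placeholders for whatever capacity query and write primitive the channel provides, not existing PX4 APIs):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Generated MAVLink headers provide mavlink_message_t, MAVLINK_MAX_PACKET_LEN
 * and mavlink_msg_to_send_buffer(). */
#include <common/mavlink.h>

/* Placeholders for the channel's remaining-capacity query and write primitive. */
extern size_t uart_tx_free_bytes(void);
extern bool uart_write(const uint8_t *data, size_t len);

/* Serialize the complete frame (header + payload + checksum + optional
 * signature) into one stack buffer, check the link capacity once for the
 * whole message, then hand it to the driver in a single write. Either the
 * whole frame goes out or none of it does, and a forwarding or wrapping
 * layer can grab the message as one contiguous blob. */
static bool send_message_atomic(const mavlink_message_t *msg)
{
    uint8_t buf[MAVLINK_MAX_PACKET_LEN];
    const uint16_t len = mavlink_msg_to_send_buffer(buf, msg);

    if (uart_tx_free_bytes() < len) {
        return false;   /* drop (or queue) the whole message, never a fragment */
    }
    return uart_write(buf, len);
}
```

The trade-off versus the current scheme is one extra stack buffer of up to MAVLINK_MAX_PACKET_LEN bytes per send, in exchange for frames that always go out atomically.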
Additional context
See PX4/Firmware#10394, with the essential line: https://github.com/PX4/Firmware/pull/10394/files#diff-b168380964a16cd9b42995acc531f1e5R110
FYI @dagar @nicovanduijn @julianoes @jkflying