[BUG]: DDS Multi Client Limited Bandwidth #20
Comments
I'm not sure this is the core of the problem. We run Zenoh in peer-to-peer mode; to have multiple clients we'd have to run it with a Zenoh server, I think. But please feel free to open a PR with the suggested change if it helped.
I need to ask what 120 achieves here. Documentation at https://github.com/eclipse-cyclonedds/cyclonedds/blob/master/docs%2Fmanual%2Foptions.md#cycloneddsdomaindiscoverymaxautoparticipantindex suggests this is not a valid option. How did you come across this solution?
@GPrathap said he uses this solution in another ROS2 project.
Interesting 🤔 @GPrathap, want to comment?
Hi @marc-hanheide, I am not entirely sure this resolves the issue; however, it's better to try and see. When I run RViz, the frequencies of some messages drop, and after setting this value it was fixed. See also https://docs.ros.org/en/galactic/How-To-Guides/DDS-tuning.html
Yes, setting a fixed participant ID can theoretically help with performance, but 120 seems to be the first illegal value, so I'm not sure why it was chosen.
I don't think we need to tune the kernel network params here, as we are not really using DDS over the network. DDS only runs on the local loopback device, since only Zenoh is used externally?
Even on a single machine, you have to set these kernel network parameters.
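For context, the kernel-level tuning the linked DDS-tuning guide refers to essentially means raising the UDP receive buffer limits via sysctl. A minimal sketch, with illustrative values rather than numbers taken from this repository or the guide:

```sh
# Raise the kernel's socket receive buffer limits so bursts of large DDS
# samples (e.g. image topics viewed in RViz) are less likely to be dropped.
# The values below are illustrative; check the DDS-tuning guide for
# recommended numbers for your setup.
sudo sysctl -w net.core.rmem_max=8388608
sudo sysctl -w net.core.rmem_default=8388608
```

To persist across reboots, the same keys would normally go into a file under /etc/sysctl.d/.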
I toyed around with the most simple CYCLONEDDS settings in LCAS/teaching#44. It appears to make things much more performant for me, but I have only started investigating.
I think LCAS/teaching#49 is actually the correct way to address this. We actually want multicast inside the container. It will be contained to the container anyway. |
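For reference, enabling multicast for CycloneDDS inside the container would look roughly like the sketch below. This is an assumption about the general approach, not the actual contents of LCAS/teaching#49, and the interface name "lo" is a guess at the container's loopback device:

```xml
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <General>
      <!-- Let CycloneDDS use multicast for discovery and data. -->
      <AllowMulticast>true</AllowMulticast>
      <!-- Bind to the loopback interface so DDS traffic stays inside the
           container. "lo" is an assumption; adjust to the actual interface.
           Requires a CycloneDDS version that supports the Interfaces element. -->
      <Interfaces>
        <NetworkInterface name="lo" multicast="true"/>
      </Interfaces>
    </General>
  </Domain>
</CycloneDDS>
```

Multicast on the loopback device may also need to be enabled at the OS level, e.g. with ip link set lo multicast on.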
Description of the bug
When two or more clients are using RViz through the Zenoh bridge, topic messages drop.
Change from <ParticipantIndex>auto</ParticipantIndex> to <ParticipantIndex>120</ParticipantIndex> in https://github.com/LCAS/limo_ros2/blob/humble/.devcontainer/setup-router.sh#L28C27-L28C31

A sketch of where this setting lives in the CycloneDDS configuration is shown below.
@cooperj, @GPrathap
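For orientation, both settings discussed in this thread live under Domain/Discovery in the CycloneDDS XML configuration. The snippet below is a minimal sketch, not this repository's actual config: whether a fixed index of 120 is legal at all is exactly what the comments above question, so the MaxAutoParticipantIndex option referenced there is shown as the alternative.

```xml
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <Discovery>
      <!-- The change described in this issue: pin the participant index
           instead of letting CycloneDDS pick one automatically. -->
      <ParticipantIndex>120</ParticipantIndex>

      <!-- Alternative raised in the comments: keep "auto" and raise the
           upper bound on automatically assigned indices instead.
      <ParticipantIndex>auto</ParticipantIndex>
      <MaxAutoParticipantIndex>120</MaxAutoParticipantIndex>
      -->
    </Discovery>
  </Domain>
</CycloneDDS>
```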
Steps To Reproduce
Multiple students connecting to the same robot.
Additional Information
No response