protocols/mdns: Optimise InterfaceState::poll for low latency #2939
Conversation
This ensures we do as much work as possible before we need to get polled again.
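Roughly, the pattern under discussion looks like the sketch below. It is a simplified, self-contained toy, not the actual `InterfaceState` from `protocols/mdns`; the struct, field names, and the synchronous stand-ins for the timer and socket are assumptions made purely for illustration. The point is the loop shape: keep generating and flushing work until nothing can make progress, and only then return `Pending`.

```rust
use std::collections::VecDeque;
use std::task::Poll;

/// Toy stand-in for `InterfaceState`; all names here are illustrative
/// assumptions, not the real mdns types.
struct ToyInterface {
    queries_due: u32,              // "timer ticks" that still need a query
    send_buffer: VecDeque<String>, // packets waiting to go out
    socket_budget: u32,            // how many packets the "socket" accepts right now
}

impl ToyInterface {
    /// Do as much work as possible before yielding: generate all due
    /// queries, flush the buffer, and only then return `Pending`.
    fn poll(&mut self) -> Poll<()> {
        loop {
            // Local work first: turn every due tick into a query.
            if self.queries_due > 0 {
                self.queries_due -= 1;
                self.send_buffer.push_back("mdns query".to_owned());
                continue;
            }

            // Then empty the local buffer as long as the socket accepts packets.
            if let Some(packet) = self.send_buffer.pop_front() {
                if self.socket_budget > 0 {
                    self.socket_budget -= 1;
                    let _sent = packet;
                    continue;
                }
                // Socket is busy: keep the packet and wait for the next wake-up.
                self.send_buffer.push_front(packet);
            }

            return Poll::Pending;
        }
    }
}

fn main() {
    let mut iface = ToyInterface {
        queries_due: 2,
        send_buffer: VecDeque::new(),
        socket_budget: 8,
    };
    assert!(iface.poll().is_pending());
    // Everything was generated *and* flushed within a single poll call.
    assert!(iface.send_buffer.is_empty());
}
```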
Thanks for looking into this.
I don't yet understand what the issue was. Was self.timeout not polled early on and thus no waker registered?
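For background on the waker question: a future can only register the current task's waker from inside its own `poll`. The sketch below uses a made-up `Timeout` type (not the real `self.timeout`) just to show where that registration happens; if `InterfaceState::poll` returned without ever reaching this point, nothing would wake the task when the deadline passes.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, Waker};

/// Made-up timer future, for illustration only.
struct Timeout {
    fired: bool,
    waker: Option<Waker>,
}

impl Future for Timeout {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.fired {
            Poll::Ready(())
        } else {
            // The only place a waker can be captured is here, during poll.
            // If the surrounding state machine never reaches this line,
            // nothing will call `wake()` when the timer fires.
            self.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}
```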
self.send_buffer.push_back(build_query());
}

// 2nd priority: Keep local buffers small: Empty buffers ASAP.
Shouldn't we first try to send existing work (self.send_buffer.pop_front) and then add more work (self.send_buffer.push_back)? In case we do a push_back, we can do a continue to follow with a pop_front.
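A sketch of that suggested control flow, using placeholder stubs for the timer and socket rather than the real code in this PR:

```rust
use std::collections::VecDeque;
use std::task::Poll;

/// Hypothetical helpers standing in for the real timer and socket; the
/// shape of the loop is what matters here, not these stubs.
fn socket_ready() -> bool {
    true
}
fn timer_fired(ticks: &mut u32) -> bool {
    if *ticks > 0 {
        *ticks -= 1;
        true
    } else {
        false
    }
}
fn build_query() -> String {
    "mdns query".to_owned()
}

/// Suggested ordering: try to drain the buffer first, and when the timer
/// produces new work, `continue` so the next iteration sends it.
fn poll(send_buffer: &mut VecDeque<String>, ticks: &mut u32) -> Poll<()> {
    loop {
        // 1. Send existing work before creating more.
        if let Some(packet) = send_buffer.pop_front() {
            if socket_ready() {
                let _sent = packet;
                continue;
            }
            // Socket is busy: keep the packet and yield.
            send_buffer.push_front(packet);
            return Poll::Pending;
        }

        // 2. Add new work, then loop back to the pop_front above.
        if timer_fired(ticks) {
            send_buffer.push_back(build_query());
            continue;
        }

        return Poll::Pending;
    }
}

fn main() {
    let mut buffer = VecDeque::new();
    let mut ticks = 1;
    assert!(poll(&mut buffer, &mut ticks).is_pending());
    assert!(buffer.is_empty()); // the query was created and sent in one call
}
```

With the `continue` in the timer branch, both orderings still drain everything before returning `Pending`; the ordering only decides which branch runs first within a single wake-up.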
I am not sure if it makes a big difference. The timer is our only local work, i.e. nothing else will push to the buffer. When we say "prefer local work over remote work", does that not imply that whatever generates the local work has the highest priority, followed by trying to send it to the remote?
The issue was that we pushed into the send_buffer but did not try to flush it within the same poll, so the packet had to wait until the next wake-up.
It is easier to understand this function if all buffer manipulations happen in here instead of in helper functions: it gives better encapsulation and allows for better unit testing. Not that we have tests for this at the moment, but in general 😊
@mxinden I addressed your comments and also went on a bit of a refactoring detour because I found some of the code hard to understand. I hope it is better now 😊 You should be able to review patch-by-patch!
Thank you @thomaseizinger. That helped.
Description
This is now in accordance with our coding guidelines. Inspired by @gallegogt's findings.
Links to any relevant issues
Open Questions
Change checklist
I have added tests that prove my fix is effective or that my feature works