Use WorkLimiter also for sending data #1192
Conversation
Force-pushed from 437a60c to d9ff10a
Seems like this uncovered a bug where sometimes a limit of 0 was calculated if the cycle time in measuring mode was extremely high. That blocked all IO until the next measuring mode. I added one commit to fix that by allowing a minimum of 1 work item.
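For illustration, here is a minimal sketch of that minimum-allowance fix; the function name and parameters are hypothetical and not quinn's actual internals:

```rust
// Hypothetical sketch, not quinn's actual code: if the adaptive calculation
// rounds the per-cycle allowance down to 0 (e.g. because a measuring cycle
// took very long), no work would be permitted until the next measuring mode
// and IO would stall. Clamping to 1 guarantees forward progress.
fn allowed_items(cycle_budget_nanos: u64, nanos_per_item: u64) -> u64 {
    // Assumes `nanos_per_item` is non-zero; see the later 1 ns minimum.
    (cycle_budget_nanos / nanos_per_item).max(1)
}

fn main() {
    // A pathologically slow cycle still permits one item.
    assert_eq!(allowed_items(10, 1_000_000), 1);
}
```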
Seems straightforward enough. Prompted some non-blocking incidental thoughts.
quinn/src/endpoint.rs (Outdated)
    return Ok(false);
}
Poll::Ready(Err(e)) => {
    self.send_limiter.finish_cycle();
All these separate exits make me start leaning more towards @djc's favored Drop guard approach for bounding the cycle. Might be easy to miss one in a refactor.
Right - I didn't like the duplication too much either. I've now improved it a bit by breaking from the loop and having a common return place. That doesn't protect against someone accidentally adding a return later on, but it provides a decent improvement for the amount of changes.
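As a rough illustration of the Drop-guard idea discussed above (all names here are made up for the sketch, this is not quinn's code), a guard that finishes the cycle when it goes out of scope would cover every early return automatically:

```rust
// Stand-in limiter for demonstration only; the real WorkLimiter measures
// elapsed time and adapts its allowance.
struct Limiter {
    cycles_finished: u32,
}

impl Limiter {
    fn finish_cycle(&mut self) {
        self.cycles_finished += 1;
    }
}

// RAII guard: ends the cycle when dropped, so every exit path (early returns
// and errors included) calls `finish_cycle()` exactly once.
struct CycleGuard<'a>(&'a mut Limiter);

impl Drop for CycleGuard<'_> {
    fn drop(&mut self) {
        self.0.finish_cycle();
    }
}

fn do_io(limiter: &mut Limiter, fail_early: bool) -> Result<(), ()> {
    let _guard = CycleGuard(limiter);
    if fail_early {
        // Early exit: the guard's Drop impl still finishes the cycle.
        return Err(());
    }
    Ok(())
}

fn main() {
    let mut limiter = Limiter { cycles_finished: 0 };
    let _ = do_io(&mut limiter, true);
    let _ = do_io(&mut limiter, false);
    assert_eq!(limiter.cycles_finished, 2);
}
```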
Force-pushed from d9ff10a to a02291e
Without this, users of the limiter would just be stuck and could never get work done. Also make sure a work item is calculated to take at least 1 ns, to prevent a division by zero.
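A similarly hypothetical sketch of the 1 ns safeguard mentioned here (illustrative names only, not quinn's code):

```rust
use std::time::Duration;

// Hypothetical sketch: clamp the measured cost per work item to at least
// 1 ns, so dividing the cycle budget by it later can never divide by zero,
// even if the clock reports no elapsed time.
fn nanos_per_item(elapsed: Duration, completed_items: u64) -> u64 {
    let total_nanos = elapsed.as_nanos() as u64;
    (total_nanos / completed_items.max(1)).max(1)
}

fn main() {
    // Even with zero measured time, the per-item cost is at least 1 ns.
    assert_eq!(nanos_per_item(Duration::ZERO, 50), 1);
}
```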
This adds time-based yielding to the send loop in the same fashion it had been previously added to the receive loop. In my performance testing this didn't show a noticeable difference - likely because in the current benchmark the client is the bottleneck. But it should make things more deterministic.
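To make the yielding pattern concrete, here is a heavily simplified, self-contained sketch; `SimpleLimiter` and the fixed time budget are stand-ins and do not reflect quinn's adaptive WorkLimiter:

```rust
use std::time::{Duration, Instant};

// Simplified stand-in for the limiter: enforces a fixed time budget per
// cycle, whereas quinn's WorkLimiter adapts based on measured per-item cost.
struct SimpleLimiter {
    budget: Duration,
    cycle_start: Option<Instant>,
}

impl SimpleLimiter {
    fn new(budget: Duration) -> Self {
        Self { budget, cycle_start: None }
    }

    fn start_cycle(&mut self) {
        self.cycle_start = Some(Instant::now());
    }

    // Whether more work may be done in the current cycle.
    fn allow_work(&self) -> bool {
        self.cycle_start
            .map(|start| start.elapsed() < self.budget)
            .unwrap_or(false)
    }

    fn finish_cycle(&mut self) {
        self.cycle_start = None;
    }
}

// Shape of the send loop described in the PR: do batches of work while the
// limiter allows it, then stop and finish the cycle at a single exit point.
fn drive_send(limiter: &mut SimpleLimiter, mut batches_left: u32) -> u32 {
    limiter.start_cycle();
    let mut sent = 0;
    while batches_left > 0 && limiter.allow_work() {
        // Pretend to transmit one batch of datagrams here.
        batches_left -= 1;
        sent += 1;
    }
    // Common return place, so the cycle is always finished.
    limiter.finish_cycle();
    sent
}

fn main() {
    let mut limiter = SimpleLimiter::new(Duration::from_micros(500));
    let sent = drive_send(&mut limiter, 1_000);
    println!("sent {sent} batches before yielding back to the executor");
}
```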
Force-pushed from a02291e to de85792
Thanks!