SplitHTTP client: Add minUploadInterval #3592
Conversation
Add a new config option that throttles the rate of upload requests. The new default behavior is to initiate a new upload request at most every 10 milliseconds; before, there was no limit. The old behavior can be restored by setting a lower value, or by setting `-1`. Due to quirks in config parsing, `0` means the same as `10`. On my machine, `10` ms improves stability on splithttp+h2. It does not affect QUIC at all. More testing is needed. Since some CDNs also bill by number of requests (on top of bytes transferred), the combination of `maxConcurrentUploads` and `maxUploadInterval` should also provide some more control over CDN cost. (In theory; I don't have that use case.)
If I'm not mistaken, this should be called … but there is also … In addition, …
Fixed the naming. Yep, this is what you proposed. 🙇♂️ I had it implemented for a while but didn't see it making a difference on QUIC; then today I noticed it works well for h2. Maybe it's just my machine. Regarding …
In conclusion: yes, it's meaningless for saturating bandwidth, but there are so many brittle components along the path that they need to be protected from high concurrency. Maybe. Range options: yes, they should be implemented. I will try to make them similar to the fragment options.
I've noticed significant discrepancies in upload speeds among users. Some experience rates as low as a few hundred Kbps, while others achieve tens of Mbps.
@mmmray I think everything you mentioned can be addressed by adjusting … Take my example of a 300 ms RTT: with this PR's current code, within each 300 ms window you basically upload once every 10 ms for the first 100 ms and then block for the remaining 200 ms. That is far less efficient than uploading once every 30 ms (more data could accumulate per request) and less stable (latency, rate, etc. would be more even). So by default, … should be …
Oh, I finally understood. You're saying: if I first measure RTT, or assume a certain value for it, then I can use …
@mmmray I'm not asking you to measure RTT; I gave a concrete example with an RTT above 100 ms to make you see where the problem is. I don't want to configure "how many uploads per RTT"; I only want to set … based on what the CDN and the server can accept.
@mmmray I think what influences how I set … is …
I have adjusted … I just realized that increasing … I understand that setting … Maybe it is better handled using a MaxConns/MaxStreams setting somewhere lower in the HTTP stack, because at that point the "http packet" has already been created. However, …
There is no need to split this into two options. Just change the server default to 100 first, and in the version after next change the client default to 100 as well.
In short, all of these values should be determined by the server; the client may only comply and must not exceed them, just like the REALITY server's …
Can it only be dropped, rather than delaying processing?
I think you can handle that in a separate PR.
That way the upgrade is smooth; in the version after next, change the client default to 100 as well.
I added the rand range options. Two caveats: …
What else needs to be done for this PR? I think it is fine as-is; I only wanted to solve the upload performance concern.
Is it necessary? I don't see a way to enforce this on the server without measuring more things, or adding a constant 30 ms.
I think I need to clarify "drop". If packets arrive out of order and the buffer fills up beyond …, … It can be delayed instead, but I think it would cause more issues if HTTP requests get "lost", as that makes the connection hang completely. In http/1.1 pipelining, the response is currently not read, so there is no ACK on any of the packets (and no retry mechanism). Anyway, I don't think it is necessary to reconsider this at the moment. It hasn't caused issues so far, and you have already come up with a solution to raise …
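The "drop" behavior being discussed can be sketched as an out-of-order reassembly buffer with a hard cap. This is a hypothetical illustration with made-up names and limits, not the actual SplitHTTP server code:

```go
package main

import "fmt"

// reorderBuffer delivers chunks in sequence order. Out-of-order chunks are
// held until the gap fills; once more than max chunks are pending, further
// out-of-order chunks are dropped instead of queued (the "drop" above).
type reorderBuffer struct {
	next    uint64            // next expected sequence number
	pending map[uint64][]byte // out-of-order chunks awaiting delivery
	max     int               // cap on buffered chunks before dropping
	out     func([]byte)      // in-order delivery callback
}

func (b *reorderBuffer) push(seq uint64, data []byte) {
	if seq < b.next {
		return // duplicate or stale, ignore
	}
	if seq != b.next {
		if len(b.pending) >= b.max {
			return // buffer full: drop rather than stall the connection
		}
		b.pending[seq] = data
		return
	}
	b.out(data)
	b.next++
	// Flush any buffered chunks that are now contiguous.
	for d, ok := b.pending[b.next]; ok; d, ok = b.pending[b.next] {
		b.out(d)
		delete(b.pending, b.next)
		b.next++
	}
}

func main() {
	var got []uint64
	b := &reorderBuffer{pending: map[uint64][]byte{}, max: 2,
		out: func(d []byte) { got = append(got, uint64(d[0])) }}
	b.push(1, []byte{1}) // arrives early, buffered
	b.push(0, []byte{0}) // fills the gap, delivers 0 then 1
	fmt.Println(got)     // [0 1]
}
```

Delaying instead of dropping would mean blocking the HTTP handler until the gap fills, which is exactly the hang scenario described above when a request is lost and never arrives.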
… should be removed from this PR.
Change this logic to: "when the client detects that it is not …"
By the way, I plan to add a parameter to the path in the next version. I took a quick look at the code, and old clients will not strip the parameter, so that is also incompatible. Maybe we should just break compatibility outright.
About the ook vs ok feature detection: doesn't it mean that a 1.8.22 server will not be compatible with a 1.8.16 client? I think this breakage is happening too soon; 1.8.16 is still in some very important iOS clients. Maybe wait a bit, or …
It's not clear to me how to evolve the protocol right now in general, and there are too many "special-purpose" hacks. I think the client and server may need to start sending a version number; they can do it over query parameters:
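A hypothetical sketch of what query-parameter versioning could look like; the parameter name `x_version` and the fallback rules are assumptions for illustration, not anything specified by the protocol:

```go
package main

import (
	"fmt"
	"net/url"
)

// withVersion appends a protocol version to the session URL on the client.
func withVersion(rawURL, version string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("x_version", version) // hypothetical parameter name
	u.RawQuery = q.Encode()
	return u.String(), nil
}

// serverVersion extracts the advertised version on the server,
// treating an absent parameter as a pre-versioning client.
func serverVersion(rawURL string) string {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "1" // unparseable: assume oldest behavior
	}
	if v := u.Query().Get("x_version"); v != "" {
		return v
	}
	return "1" // no parameter: legacy client
}

func main() {
	u, _ := withVersion("https://example.com/split?session=abc", "2")
	fmt.Println(u)
	fmt.Println(serverVersion(u))
}
```

The point raised two comments below still applies: if the client's POSTs race ahead of the GET, the server cannot use a version carried only on one request path without extra round trips, so every request would need to carry the parameter.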
Of course, the protocol becomes very difficult to port to other cores if this sort of thing is done...
Actually, this will not work that well: the client may send the POST sooner, and if it needs to wait for the version to come back from the GET to determine behavior, that just adds more RTT.
Just change the client default to 100 as well. The server side is easier to upgrade and remains compatible with older clients.
Regarding …
ok
ok
In a future PR? maxConcurrentUploads is already 100 on the client; do you want to go even higher on the server?
I want to change this "a" to "the", …
Whoa, I was wondering why you set the server default to 200. Blame the translation: I meant "take the largest value within the range", not "take a larger value", …
I just realized that I may have misinterpreted that sentence then. Now the server defaults to 200 and the client to 100, but I think you said that the server should just take the upper end of the range, not have different defaults. Anyway, it doesn't seem to be a big difference.
EDIT: ...yes
@RPRX please do not release for now, I think I found some issues.