SplitHTTP: More range options, change defaults, enforce maxUploadSize, fix querystring behavior #3603
Conversation
This reverts commit 2932fb9.
I remember that Cloudflare's free tier allows uploads of up to 100MB, but our servers usually can't handle 100MB*100, so the server side should also enforce a limit.
Actually, looking at it that way, even 1MB*100 is a pretty big problem; a few extra streams would be enough to overwhelm the server...
When the server-side buffer is full, just close the entire connection. What remains is then an ordinary layer-7 DDoS problem.
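As a rough sketch of that idea in Go (all names here are illustrative, not Xray-core's actual API): once the bytes buffered for a session would exceed the configured maximum, the server refuses the data and tears the whole session down.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// uploadSession is an illustrative stand-in for a server-side SplitHTTP
// session: it caps the total bytes buffered from upload POSTs and closes
// the whole session once the cap would be exceeded.
type uploadSession struct {
	mu             sync.Mutex
	bufferedBytes  int // bytes currently held, decremented as the consumer reads
	maxUploadBytes int // e.g. the 2MB server default from the changelog below
	closed         bool
}

var errTooLarge = errors.New("too large upload, closing session")

// Push buffers one upload chunk, or fails and marks the session closed when
// the byte cap would be exceeded, instead of buffering unbounded data.
func (s *uploadSession) Push(chunk []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.closed {
		return errors.New("session already closed")
	}
	if s.bufferedBytes+len(chunk) > s.maxUploadBytes {
		s.closed = true // buffer full: give up on the entire connection
		return errTooLarge
	}
	s.bufferedBytes += len(chunk)
	return nil
}

func main() {
	s := &uploadSession{maxUploadBytes: 2 * 1024 * 1024}
	err := s.Push(make([]byte, 3*1024*1024)) // a 3MB chunk exceeds the cap
	fmt.Println(err)
}
```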
By the way, could you open a PR to move the parameters after `path` to after the new `path`? I plan to merge these PRs and release v1.8.22 first, then add multiplex control.
The translator did not catch this. Do you want to make these things part of the querystring, like
It was more convenient to make all the pending changes in the same PR. I updated the PR description and title to reflect this. I'm starting to lose track again of what other changes are needed. Let's see:
This is how it has worked since 1.8.16, but before this PR the buffer was only constrained by packet count, not by total size in bytes.
@@ -75,7 +75,7 @@ func (h *requestHandler) upsertSession(sessionId string) *httpSession {
	}

	s := &httpSession{
		uploadQueue: NewUploadQueue(int(2 * h.ln.config.GetNormalizedMaxConcurrentUploads())),
When changing it, I realized the server was already implicitly allowing 2 * maxConcurrentUploads. I changed it so that the limit can be set more directly, and doubled the default instead. I think this 2 * was only ever supposed to be a temporary hack.
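A toy illustration of that change, using only what the diff above and the PR description below state (the 200 server default comes from the changelog); everything else is an assumed name, not the real implementation.

```go
package main

import "fmt"

// Toy illustration of moving the multiplier out of the call site: the queue
// is sized directly from the normalized config value, and the doubling lives
// in the default instead (200 is the server default from the PR description).
// This is not Xray-core's real config type, just the shape of the idea.
type config struct {
	maxConcurrentUploads int32 // 0 means "not set, use the default"
}

func (c *config) normalizedMaxConcurrentUploads() int32 {
	if c.maxConcurrentUploads == 0 {
		return 200 // doubled default lives here, not as "2 *" at the call site
	}
	return c.maxConcurrentUploads
}

func main() {
	c := &config{}
	fmt.Println("upload queue capacity:", c.normalizedMaxConcurrentUploads())
}
```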
This can be left unimplemented for now.
For example, right now. Once this is finished I will merge this PR and then release v1.8.22; until then, please don't push new commits to main.
Done.
After merging a few more PRs I will release a new version. Also, the version after next needs two new options added on the server side:
oklen could just be hardcoded to a random range; making it an option doesn't seem to be worth much.
If it can't be customized, it will always be a fixed target. What I had in mind is allowing customization with a default random range, requiring the client to be v1.8.21+. In any case, this question still needs some research.
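A hedged sketch of the idea under discussion: the "ok" response length is drawn from a random range with a built-in default that configuration can override. The names and the 64-256 default range are assumptions for illustration, not values from this PR.

```go
package main

import (
	"fmt"
	"math/rand"
)

// RandRange is an illustrative [From, To] range for the length of the
// server's "ok" response: a built-in default random range that a server
// option could override (which is why clients would need v1.8.21+).
// The 64-256 default below is an assumption, not a value from this PR.
type RandRange struct {
	From, To int32
}

// roll picks a length uniformly from the range.
func (r RandRange) roll() int32 {
	if r.To <= r.From {
		return r.From
	}
	return r.From + rand.Int31n(r.To-r.From+1)
}

// normalizedOkLen falls back to the default range when nothing is configured.
func normalizedOkLen(configured *RandRange) RandRange {
	if configured == nil {
		return RandRange{From: 64, To: 256}
	}
	return *configured
}

func main() {
	fmt.Println("ok response length:", normalizedOkLen(nil).roll())
}
```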
		}
	} else {
		return RandRangeConfig{
			From: 100,
The default value is changed from 10 to 100, really?
I think the justification for this is that minUploadInterval=30 should prevent this limit from being reached in practice (#3592 (comment)). I also don't see this limit being reached at all in my tests (the number of connections/streams is below 20 at all times), so I am also not convinced it needed to be raised by that much (or at all).
Look at it this way: originally I wanted to remove this option entirely, but it is in fact still somewhat useful on the server side.
I also don't see this limit being reached at all in my tests (the number of connections/streams is below 20 at all times), so I am also not convinced it needed to be raised by that much (or at all)

If someone set minUploadInterval=10 to play games, the number of concurrent streams should go up, I guess.
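The reasoning in this thread can be made concrete with a back-of-envelope calculation; the POST lifetime used below is an assumed example value, not a measurement.

```go
package main

import "fmt"

// Back-of-envelope estimate for the discussion above: a new upload POST is
// dispatched at most once per minUploadInterval, so the number of streams in
// flight is roughly the lifetime of one POST divided by that interval.
func inFlight(postLifetimeMs, minUploadIntervalMs int) int {
	if minUploadIntervalMs < 1 {
		minUploadIntervalMs = 1
	}
	return (postLifetimeMs + minUploadIntervalMs - 1) / minUploadIntervalMs
}

func main() {
	fmt.Println(inFlight(300, 30)) // ~10 concurrent uploads, well under 100
	fmt.Println(inFlight(300, 10)) // ~30 with the interval lowered for gaming
}
```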
Fixed.
I searched high and low but couldn't find where the difference is.
@Fangliding There are three places in the documentation that need to be changed:
Unless the connection from the client is HTTP/1.1, the client will see a faulty response and should terminate the connection entirely. With HTTP/1.1, request pipelining will behave this way: piles up
v1.8.22 has been released. Under normal circumstances a CDN will not drop a POST,
@mmmray Please look into this and write up lenOK. @Fangliding In the documentation, noSSEHeader needs to be marked as server-side only; the "server-side behavior" below it was written as "client"; and the headers of the four transports need to be marked as client-side only.
How about stuffing a hundred-thousand-character cookie header into the response? Later (s∞n) it could be changed to early data.
Since XTLS#3603 and XTLS#3611, iperf and speedtest have been triggering "too large upload" on the server. This is because v2ray's MultiBuffer pipe can actually return data larger than the configured size limit. I'm surprised nobody has noticed it so far. In principle, any heavy upload could disrupt the entire connection. In its infinite wisdom, speedtest.net hides such errors and only shows low upload speed instead. I only noticed this issue myself when inspecting server logs.
…, fix querystring behavior (XTLS#3603)
* maxUploadSize and maxConcurrentUploads can now be ranges on the client
* maxUploadSize is now enforced on the server
* the default of maxUploadSize is 2MB on the server, and 1MB on the client
* the default of maxConcurrentUploads is 200 on the server, and 100 on the client
* ranges on the server are treated as a single number: if the server is configured as `"1-2"`, the server will enforce `2`
* querystrings in `path` are now handled correctly
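As a small illustration of the range handling described in those bullets (an assumed helper, not Xray-core's actual configuration code): a value such as "1-2" parses into a [from, to] pair, the client can pick within it, and the server collapses it to the larger end.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRange is an illustrative parser for values like "1-2": the client can
// pick a number inside the range, while the server collapses the range and
// enforces the larger end (so "1-2" is enforced as 2).
func parseRange(s string) (from, to int64, err error) {
	parts := strings.SplitN(s, "-", 2)
	from, err = strconv.ParseInt(strings.TrimSpace(parts[0]), 10, 64)
	if err != nil {
		return 0, 0, err
	}
	to = from
	if len(parts) == 2 {
		to, err = strconv.ParseInt(strings.TrimSpace(parts[1]), 10, 64)
		if err != nil {
			return 0, 0, err
		}
	}
	return from, to, nil
}

func main() {
	from, to, _ := parseRange("1-2")
	fmt.Printf("client picks from [%d, %d]; server enforces %d\n", from, to, to)
}
```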
Follow-up from #3592
"1-2"
, server will enforce2
path
are now handled correctly