Exposes parts of RequestScheduler in the public API #8549
Conversation
Thanks for the pull request @sanjeetsuhag!
Reviewers, don't forget to make sure that:
@mramato @kring I would appreciate your thoughts on this PR. I think having coarse-grained controls over throttling is good. We may need a more descriptive summary and examples, but I want to make sure we actually want to expose these properties.
I think the question we need to ask is: have we had a single reported issue, or otherwise solid use case, that would benefit from exposing these features? I honestly can't think of any, which leads me to say there's no reason to expose these. I'm not against the idea, I'm just trying to think of it pragmatically. I also wonder, in the age of HTTP/2, if we need to revisit some of the assumptions of throttling in general (but that's probably outside the scope of this PR).
I think requestsByServer is important because HTTP/2 servers will want a different throttle than HTTP/1, and there's no way for us to automatically detect that (which is why this property exists to begin with).
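The idea behind a per-server throttle table can be sketched like this. This is an illustrative model, not Cesium's exact API: hosts known to speak HTTP/2 get a higher cap, and everything else falls back to a conservative default close to the browser's own HTTP/1.1 per-host limit.

```javascript
// Hypothetical per-server overrides, loosely modeled on the idea behind
// requestsByServer. Hostnames and numbers here are made up for illustration.
const requestsByServer = {
  // An HTTP/2 server can usually tolerate many more concurrent requests.
  "tiles.example.com": 18,
  // A plain HTTP/1.1 origin stays near the browser's per-host limit.
  "legacy.example.com": 6,
};

const defaultMaximumRequestsPerServer = 6;

function maximumRequestsForServer(host) {
  const perServer = requestsByServer[host];
  return perServer !== undefined ? perServer : defaultMaximumRequestsPerServer;
}

console.log(maximumRequestsForServer("tiles.example.com"));   // 18
console.log(maximumRequestsForServer("unknown.example.com")); // 6
```

The lookup is trivial, but that is the point: since the client cannot detect the protocol, the table is the only place this knowledge can live, which is why a per-server override has to be part of the public surface if throttling is exposed at all.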
The main summary paragraph for RequestScheduler should probably be greatly expanded to explain why we have throttling at all, plus any other important aspects of our throttling system a dev needs to understand in order to decide whether they should tweak the settings at all. Otherwise, how would a dev know how our system reacts?
I just want to spill some pixels on why we throttle, because it's actually not that obvious, especially in the context of HTTP/2. In part just to make sure I understand it correctly.

If our code made 100 requests at once over an HTTP/1.1 connection, the browser would (even without any of this request throttling code in Cesium) automatically throttle them and actually only do ~6-10 requests at a time. The exact number varies from browser to browser and has changed over time; I've lost track. Over an HTTP/2 connection, it would possibly do all 100 requests simultaneously (though more efficiently than 100 requests over HTTP/1.1, of course), or it might do a smaller number as negotiated between the client and server. So at first glance, it's counter-productive to do throttling in Cesium itself: over HTTP/1.1 the browser is doing it anyway, and over HTTP/2 a higher number of simultaneous requests might be more performant, so why cap it?

But actually the main reason we throttle is that we change our mind constantly about what we want to load. Any time the camera moves, all the requests we decided to load previously might become irrelevant, and we now want to load entirely different ones. We don't want those old requests to slow down the new ones, either by occupying space in a queue ahead of them or by downloading simultaneously and making the new data take longer to arrive. So our throttling code artificially constrains the number of requests in flight at once, so that new requests wait in a shorter queue and have less competition for bandwidth. That is very necessary for Cesium to work well, no matter whether we're on HTTP/1.1 or HTTP/2.

Other things - notably Leaflet! - solve this problem in a different way. Instead of constraining the number of simultaneous requests, Leaflet does all requests simultaneously, and when it decides a request is no longer relevant because the camera moved, it cancels that request.
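The trade-off described above can be boiled down to a toy model: with only a couple of requests in flight, a freshly prioritized request only has to wait for a short queue instead of competing with a pile of stale ones. This is a minimal sketch with made-up names, not Cesium's actual RequestScheduler.

```javascript
// Toy model of throttle-style scheduling: keeping only a few requests in
// flight means pending requests can still be re-prioritized (or dropped)
// before they ever go out. Illustrative only.
class TinyScheduler {
  constructor(maximumInFlight) {
    this.maximumInFlight = maximumInFlight;
    this.inFlight = 0;
    this.pending = []; // { name, priority } - lower number is more urgent
    this.started = []; // names, in the order they actually went out
  }
  request(name, priority) {
    this.pending.push({ name, priority });
  }
  complete() {
    this.inFlight -= 1; // a finished response frees a slot
  }
  update() {
    // Called each frame: the most urgent pending requests go out first.
    this.pending.sort((a, b) => a.priority - b.priority);
    while (this.inFlight < this.maximumInFlight && this.pending.length > 0) {
      this.started.push(this.pending.shift().name);
      this.inFlight += 1;
    }
  }
}

const s = new TinyScheduler(2);
s.request("oldTileA", 1);
s.request("oldTileB", 2);
s.request("oldTileC", 3);
s.update();              // oldTileA and oldTileB start; oldTileC waits
s.request("newTile", 0); // camera moved: a far more urgent request appears
s.complete();            // one old request finishes, freeing a slot
s.update();              // newTile jumps ahead of the still-pending oldTileC
// s.started is now ["oldTileA", "oldTileB", "newTile"]
```

Because only two requests were in flight, `newTile` overtook `oldTileC` without any cancellation machinery; in an unthrottled model all three old requests would already be on the wire, competing with `newTile` for bandwidth.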
The downside to Leaflet's approach is that you're relying on the browser to efficiently handle rapid-fire creation and cancellation of requests. That's probably mostly fine in modern browsers. But you're also counting on the server, and any other infrastructure between it and the client, to handle this efficiently, which is maybe more problematic. For example, servers that don't stop handling a request when the client disconnects (or cancels the request over HTTP/2) will be crushed by clients making potentially hundreds of simultaneous requests, followed by a hundred more 3 seconds later when the camera moves. At Data61 we also saw cases where something - maybe CloudFront - started breaking HTTP/2 connections with this rapid create/cancel behavior, possibly because it looks like a denial-of-service attack. See TerriaJS/terriajs#3744

TL;DR: we need throttling by server even with HTTP/2, unless we move to a cancel-instead-of-throttle model, and if we do that we might run into other issues.
More on topic, this PR is fine by me. Giving users some control over throttling is a good idea.
@sanjeetsuhag can you please update this PR so we can get it merged? Thanks!
@hpinkos Updated the PR
@kring that's interesting about Leaflet. We observed something similar in Google Earth web when @loshjawrence started working on request prioritization for 3D Tiles. I think they've changed their strategy since. 3D Tiles does a mix of throttling and cancelling, but cancels happen infrequently.
Thanks again for your contribution @sanjeetsuhag! No one has commented on this pull request in 30 days. Maintainers, can you review, merge, or close it to keep things tidy? I'm going to re-bump this in 30 days. If you'd like me to stop, just comment with
@hpinkos Could you do a final review and merge? This fell off my radar for 1.68, but I'd like to get it into 1.69.
Thanks @sanjeetsuhag!
@kring speaking of "changing your mind", have you looked at
Thanks @thw0rted, I wonder if that would lead - in some (broken?) environments - to the same problems we saw canceling requests at Data61. I haven't used AbortController / AbortSignal, but it sounds similar to canceling via other mechanisms. It would be interesting to experiment with, though.
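For reference, the cancel-instead-of-throttle model maps onto the standard AbortController / AbortSignal API (available in modern browsers and recent Node) roughly like this. `startTileRequest` is a hypothetical stand-in; a real app would pass the signal straight to `fetch(url, { signal })`.

```javascript
// Sketch of camera-driven cancellation with AbortController.
// startTileRequest is illustrative: it only reacts to the abort signal,
// where a real request would also resolve when the response arrives.
function startTileRequest(name, signal) {
  return new Promise((resolve, reject) => {
    if (signal.aborted) {
      reject(new Error(name + " aborted"));
      return;
    }
    signal.addEventListener(
      "abort",
      () => reject(new Error(name + " aborted")),
      { once: true }
    );
  });
}

const controllers = new Map();

function requestTile(name) {
  const controller = new AbortController();
  controllers.set(name, controller);
  // Swallow the rejection so a routine cancel doesn't surface as an error.
  return startTileRequest(name, controller.signal).catch((e) => e.message);
}

// Camera moved: cancel every request that is no longer needed.
function cancelStale(stillVisible) {
  for (const [name, controller] of controllers) {
    if (!stillVisible.has(name)) controller.abort();
  }
}

requestTile("tileA");
requestTile("tileB");
cancelStale(new Set(["tileB"])); // tileA is now off-screen and gets aborted
```

The client-side mechanics are simple; as noted above, the open question is whether servers and intermediaries tolerate this create/cancel churn.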
The Angular team has an interesting approach. They write an interface for their web request dispatcher's public API, then use DI to provide a back-end for it at runtime. (Unfortunately I think the only one you can get right now is backed by XHR, but that's an Angular problem :D ) In theory you could have an XHR-backed implementation that you use today, one that still throttles the number of connections initiated, and swap in a different back-end later. I wouldn't have suggested it since it sounds like a lot of work, but in my experience,
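The dependency-injection idea above can be sketched in a few lines: the scheduler codes against a tiny backend interface, and the concrete transport is injected at construction time. All names here are illustrative, not Angular's or Cesium's actual API.

```javascript
// A throttling XHR-era backend: enforces a client-side connection cap.
class ThrottledXhrBackend {
  constructor(maximumConnections) {
    this.maximumConnections = maximumConnections;
    this.active = 0;
  }
  send(url) {
    if (this.active >= this.maximumConnections) {
      return { url, status: "queued" }; // over the cap; wait for a slot
    }
    this.active += 1;
    return { url, status: "sent", transport: "xhr" };
  }
}

// A hypothetical HTTP/2-aware backend: no client-side cap, relying on
// connection-level multiplexing instead.
class Http2Backend {
  send(url) {
    return { url, status: "sent", transport: "http2" };
  }
}

class Scheduler {
  constructor(backend) {
    this.backend = backend; // injected; the scheduler never names a transport
  }
  request(url) {
    return this.backend.send(url);
  }
}

const xhrScheduler = new Scheduler(new ThrottledXhrBackend(1));
const first = xhrScheduler.request("/a");  // sent over xhr
const second = xhrScheduler.request("/b"); // queued: over the cap

const h2Scheduler = new Scheduler(new Http2Backend());
const third = h2Scheduler.request("/c");   // sent over http2
```

The appeal is that throttling policy lives entirely inside the injected backend, so switching strategies (throttle vs. cancel vs. multiplex) never touches the scheduler's callers.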
Fixes #8384