
Request Scheduler from 3d-tiles #3476

Merged: 15 commits, Jun 9, 2017

Conversation

lilleyse (Contributor) commented Jan 27, 2016

For #3241

Bringing the Request Scheduler into master.

I also brought TileBoundingVolume and its subclasses TileBoundingRegion, TileBoundingSphere, and TileOrientedBoundingBox into master because they are used to calculate tile distance in GlobeSurfaceTile, which is passed to the request scheduler.

All the miscellaneous request functions now pass through the RequestScheduler.

TODO

  • Update CHANGES.md

CHANGES.md Outdated
@@ -8,6 +8,7 @@ Change Log
* Deprecated
* Deprecated `GroundPrimitive.geometryInstance`. It will be removed in 1.20. Use `GroundPrimitive.geometryInstances` instead.
* Deprecated `TileMapServiceImageryProvider`. It will be removed in 1.20. Use `createTileMapServiceImageryProvider` instead.
* Deprecated `throttleRequests` flag in `TerrainProvider`. It will be removed in 1.19. It is replaced by an optional `Request` object that stores information used to prioritize requests.
Contributor:

I just noticed that Request is still private. Perhaps let's not deprecate this until we make the RequestScheduler public. If you agree, update the 3D Tiles roadmap.

Contributor:

Or maybe Request should be public. Can we reasonably do that without making the entire request scheduler public? I don't think we want to commit to the full public API today.

Contributor:

@lilleyse what is the plan here? See #3448.

Also, move this to the 1.25 section of CHANGES.md.

lilleyse (Contributor Author) commented Aug 9, 2016

@pjcozzi This is up-to-date with master and the new spec changes.

pjcozzi (Contributor) commented Aug 10, 2016

@lilleyse can you evaluate #4166 before we merge this?

pjcozzi (Contributor) commented Aug 10, 2016

Wow, congratulations on 100% code coverage in the new files.

The code all looks good. This just needs #4166 and #3448.

mramato (Contributor) commented Aug 10, 2016

I may have brought this up before, but are we sure this is going to be a net win for us? It seems like we are trying to beat the browser at its own game. I can understand prioritizing requests based on scene geometry (requesting important tiles/imagery/terrain before further-away data), but doing our own throttling seems like a really bad idea, especially with HTTP/2 adoption starting to happen (where Cesium will end up a lot slower than it should be out of the box, because no one is going to know to adjust these settings).

I also imagine this system breaks down if it is part of a larger app where requests are being made outside of the RequestScheduler (since the browser will end up throttling anyway).

I'm sure there's been a lot of thought and offline discussion put into this, and I don't mean to step on anyone's toes; I just want to understand the big picture here and be sure that this is a big enough win for us that it's worth the extra complexity.

pjcozzi (Contributor) commented Aug 10, 2016

We definitely need system-wide (cross terrain, imagery, 3D tilesets, etc.) priority for requests. We can't have, for example, terrain requests in the distance starving 3D building requests in the foreground.

As for throttling based on subdomains, I think it is fine to take this change with the same behavior as our current throttling. If I understand this change correctly, it just makes the throttling system-wide. Once whatever issues are worked out, I would be fine merging this and re-evaluating as HTTP/2 spreads. We'll always want some kind of limit so that, for example, we don't make 10K requests only to realize the next frame that we don't actually need any of them and instead want just 10 new requests.

* @type {Number}
* @default 6
*/
RequestScheduler.maximumRequestsPerServer = 6;
Contributor:

How did we choose this number? Is it just because that's what Chrome uses? I believe other browsers have different values. If this number is really important, should we have some browser-specific code in here to determine a better default?
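For context, the per-server accounting behind a cap like this can be sketched as follows. This is a simplified illustration, not the actual RequestScheduler code, and makePerServerLimiter is a hypothetical name:

```javascript
// Toy per-server limiter: requests are grouped by hostname and each host
// gets its own active count capped at maximumRequestsPerServer.
// Illustrative only; not Cesium's implementation.
function makePerServerLimiter(maximumRequestsPerServer) {
  const activeByHost = new Map();
  return {
    // Returns true and takes a slot if the host is under its cap.
    tryIssue(url) {
      const host = new URL(url).host;
      const active = activeByHost.get(host) || 0;
      if (active >= maximumRequestsPerServer) {
        return false; // this server is saturated; other hosts unaffected
      }
      activeByHost.set(host, active + 1);
      return true;
    },
    // Frees a slot when a request completes or is cancelled.
    release(url) {
      const host = new URL(url).host;
      activeByHost.set(host, (activeByHost.get(host) || 0) - 1);
    }
  };
}
```

Because counts are per-host, foo.server.com and bar.server.com each get their own budget, which is how subdomain sharding works around per-server limits.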

mramato (Contributor) commented Aug 10, 2016

We definitely need system-wide (cross terrain, imagery, 3D tilesets, etc.) priority for requests. We can't have, for example, terrain requests in the distance starving 3D building requests in the foreground.

I agree, but I would really like to see hard numbers from a variety of benchmarks across all of the major browsers, comparing this branch and master, to prove we aren't making things worse. When it comes to performance, we've always been about hard numbers in the past; no reason to stop now.

pjcozzi (Contributor) commented Aug 13, 2016

Yes, I'd like to see some numbers too. This won't be better than master, but it should not be non-trivially worse.

mramato (Contributor) commented May 8, 2017

@lilleyse should this just be closed? Or do you plan on reworking it before opening the 3d-tiles PR?

lilleyse (Contributor Author) commented May 9, 2017

@lilleyse should this just be closed? Or do you plan on reworking it before opening the 3d-tiles PR?

I plan on reworking it with @austinEng's help, but will open the 3d-tiles PR beforehand and link back to here. It can stay open for now.

@lilleyse lilleyse mentioned this pull request May 10, 2017
@lilleyse lilleyse force-pushed the request-scheduler-master branch 3 times, most recently from 00d6f25 to 917c548 Compare May 19, 2017 21:53
lilleyse (Contributor Author):

This is almost ready but I still need to add tests and do benchmarks.

I worked from a bunch of the ideas here (#5317). Per-frame budgets are now gone, so the problems when using multiple widgets (#5226) should be fixed.

A request now contains a state enum:

    var RequestState = {
        UNISSUED : 0,   // Initial unissued state.
        ISSUED : 1,     // Issued but not yet active. Will become active when open slots are available.
        ACTIVE : 2,     // Actual http request has been sent.
        DONE : 3,       // Request completed successfully.
        CANCELLED : 4,  // Request was cancelled, either explicitly or automatically because of low priority.
        FAILED : 5      // Request failed.
    };

When terrain/imagery/3d-tiles makes a request, a few things can happen:

  • Returns undefined instead of a promise if too many requests are active. Same behavior as before.
  • Returns a promise
    • Promise is resolved when the XMLHttpRequest is complete
    • Promise is rejected if the request was cancelled. This can happen if the request is explicitly cancelled (only in 3d-tiles traversal right now) or the request was kicked off the requestHeap before the xhr was sent.
      • TileTerrain and ImageryLayer check for cancelled requests and reset the state of the tile/imagery so that it can request again.
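The lifecycle above can be sketched as a small, self-contained state machine. This is illustrative only: the names mirror the PR's RequestState, but makeScheduler is a hypothetical stand-in, not Cesium's implementation:

```javascript
// Sketch of the request lifecycle described above (not the actual
// RequestScheduler code): returns undefined when too many requests are
// active, otherwise a promise that resolves on completion and rejects
// on cancellation.
const RequestState = {
  UNISSUED: 0, ISSUED: 1, ACTIVE: 2, DONE: 3, CANCELLED: 4, FAILED: 5
};

function makeScheduler(maximumActive) {
  let active = 0;
  return {
    request(req) {
      if (active >= maximumActive) {
        return undefined; // no open slots; caller retries next frame
      }
      active += 1;
      req.state = RequestState.ACTIVE;
      return new Promise((resolve, reject) => {
        // Exposed for illustration: the real xhr would drive these.
        req.complete = () => { active -= 1; req.state = RequestState.DONE; resolve(); };
        req.cancel = () => { active -= 1; req.state = RequestState.CANCELLED; reject(new Error('cancelled')); };
      });
    }
  };
}
```

A consumer like TileTerrain would check for undefined (retry next frame) and, on a rejected promise, check for the CANCELLED state and reset the tile so it can request again.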

Other changes:

  • RequestScheduler.request and RequestScheduler.schedule are now the same thing, just called RequestScheduler.request. A request may be marked as throttled, meaning it will be put through the requestHeap and wait for an open slot in activeRequests. It may be cancelled if higher-priority requests come in. Requests that are not throttled, which is almost everything except terrain/imagery, go through immediately.
  • Storing the xhr object in the Request object became really complicated with the old syntax. Now requests do not go through RequestScheduler directly. Instead, loadWithXhr, loadImage, loadJsonp, and their subordinates call RequestScheduler internally. Each of these functions takes a Request object as its last parameter. This means far fewer changes throughout the codebase.
  • More breaking changes: requestTileGeometry no longer has a throttle argument; it is replaced by a Request.
  • I'm not 100% solid on my changes in GoogleEarthEnterpriseMetadata. I may need to discuss this with @tfili.
  • I removed throttling by subdomain, but may add this back.
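The throttled path described in the first bullet can be illustrated with a toy pending queue. makePendingQueue and the distance-based priority are hypothetical simplifications of the requestHeap behavior, not the merged code:

```javascript
// Toy stand-in for the throttled path: pending throttled requests wait
// in a priority queue (nearest first), and when the queue is full the
// furthest pending request is kicked out and cancelled.
function makePendingQueue(capacity) {
  const pending = [];
  return {
    insert(request) {
      pending.push(request);
      pending.sort((a, b) => a.distance - b.distance); // nearest first
      if (pending.length > capacity) {
        const kicked = pending.pop(); // furthest request loses its slot
        kicked.cancelled = true;      // its promise would be rejected
      }
    },
    // Next request to issue when an activeRequests slot opens.
    next() {
      return pending.shift();
    }
  };
}
```

This is the behavior that lets a newly visible, nearby tile displace a low-priority request that was queued earlier.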

If anyone wants to give this a quick sanity-check that would be nice, but otherwise I still have more to do to get this ready and will update then.

pjcozzi (Contributor) commented May 23, 2017

This is almost ready but I still need to add tests and do benchmarks.

As part of this, can you please spot check a few representative views for terrain/imagery in this branch vs master to verify the number and order of requests are about the same?

pjcozzi (Contributor) commented May 23, 2017

DONE : 3, // Request completed successfully.

RECEIVED is probably better semantics and more consistent with other parts of Cesium.

pjcozzi (Contributor) commented May 23, 2017

RequestScheduler.request and RequestScheduler.schedule are now the same thing and just called RequestScheduler.request

Why not remove schedule?

pjcozzi (Contributor) commented May 23, 2017

I removed throttling by subdomain, but may add this back.

Seems like we would want this.

pjcozzi (Contributor) commented May 23, 2017

Added a TODO section to the first comment in this PR.

pjcozzi (Contributor) commented May 23, 2017

What's the plan with this w.r.t. HTTP/2 and #5316?

@lilleyse lilleyse force-pushed the request-scheduler-master branch from 6ecb6f6 to b2ca18b Compare June 5, 2017 22:13
@lilleyse lilleyse force-pushed the request-scheduler-master branch from b2ca18b to 8d698e9 Compare June 6, 2017 01:12
@@ -136,6 +136,8 @@

window.it = function(description, f, timeout, categories) {
originalIt(description, function(done) {
// Clear the RequestScheduler before each test in case requests are still active from previous tests
Cesium.RequestScheduler.clearForSpecs();
Contributor:

Is this still needed here? I would prefer to remove it unless there's a really good reason for it.

Contributor Author:

I'll see what I can do, because I don't like this either. Maybe this just needs to be called for specs that are doing more fine-grained request checking.

Contributor:

Right, it should only matter for tests that are testing the RequestScheduler itself.

lilleyse (Contributor Author) commented Jun 6, 2017

There could be cases where unrelated tests do not resolve their promises correctly, so the activeRequests list can get clogged. But since this only affects throttled requests, the clearForSpecs call only needs to go in terrain/imagery provider tests.
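The clogging scenario can be shown with a toy slot tracker. makeSlotTracker is a hypothetical stand-in for the scheduler's active-slot bookkeeping, with a clearForSpecs-style reset:

```javascript
// Toy illustration of the clogging problem: if a spec never settles its
// request, the active count stays elevated and later throttled specs see
// no free slots; a clearForSpecs-style reset restores the scheduler.
// Not Cesium's implementation.
function makeSlotTracker(maximumActive) {
  let active = 0;
  return {
    tryAcquire() {
      if (active >= maximumActive) {
        return false; // clogged: a previous spec leaked its slot
      }
      active += 1;
      return true;
    },
    // Analogous to RequestScheduler.clearForSpecs: reset between tests.
    clearForSpecs() {
      active = 0;
    }
  };
}
```

This is why the reset only matters for specs exercising throttled requests: unthrottled requests never consume a slot in the first place.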

lilleyse (Contributor Author) commented Jun 6, 2017

Results on a release build of master. All tests disable the browser's cache. The HTTP/2 server I'm using hosts both terrain and imagery on the same server, which is not ideal for throttling.

A few takeaways here:

  • Throttling works better on slower connections. At 4G speeds, throttling at 6 per server is a big win, especially in the Zoom case. At normal speeds, throttling can hurt performance slightly but usually results in fewer requests overall.
  • The HTTP/2 results are consistent with the above, but visually there is a difference. Without throttling, all the requests come in at the same time, so there are large sweeps where the terrain refines. With throttling, the loading is more incremental and looks better to me.
Results

Master | HTTP2 server | no-throttle | Static
    4.86 seconds | 449 requests
    5.07 seconds | 466 requests
    5.04 seconds | 445 requests
    5.26 seconds | 454 requests

Master | HTTP2 server | throttle 6 per-domain | Static
    5.41 seconds | 346 requests
    5.10 seconds | 351 requests
    5.46 seconds | 349 requests
    5.84 seconds | 348 requests

Master | HTTP1 server | no-throttle | Static
    2.03 seconds | 524 requests
    2.19 seconds | 524 requests
    2.38 seconds | 524 requests
    2.70 seconds | 524 requests
    2.98 seconds | 524 requests
    2.98 seconds | 524 requests
    2.64 seconds | 524 requests
    2.73 seconds | 524 requests

Master | HTTP1 server | throttle 6 per-domain | Static
    2.22 seconds | 482 requests
    3.58 seconds | 509 requests
    3.20 seconds | 506 requests
    3.15 seconds | 490 requests
    2.95 seconds | 508 requests
    2.88 seconds | 481 requests
    2.27 seconds | 471 requests
    3.00 seconds | 492 requests

Master | HTTP1 server | no-throttle | Zoom
    3.50 seconds | 825 requests
    3.78 seconds | 793 requests
    3.79 seconds | 776 requests

Master | HTTP1 server | throttle 6 per-domain | Zoom
    3.59 seconds | 802 requests
    3.83 seconds | 750 requests
    3.92 seconds | 750 requests
    3.63 seconds | 821 requests

=======================================

Master | HTTP1 server | no-throttle | Static | 4G
    21.24 seconds | 556 requests
    20.81 seconds | 540 requests

Master | HTTP1 server | throttle 6 per-domain | Static | 4G
    13.41 seconds | 349 requests
    13.69 seconds | 353 requests
    13.45 seconds | 348 requests

Master | HTTP1 server | no-throttle | Zoom | 4G
    33.02 seconds | 1022 requests
    31.89 seconds | 998 requests

Master | HTTP1 server | throttle 6 per-domain | Zoom | 4G
    9.68 seconds | 296 requests
    7.60 seconds | 270 requests

Master | HTTP2 server | no-throttle | Static | 4G
    16.53 seconds | 447 requests
    17.87 seconds | 476 requests

Master | HTTP2 server | throttle 6 per-domain | Static | 4G
    12.52 seconds | 331 requests
    13.39 seconds | 349 requests

Will have more stats for this branch to come.

lilleyse (Contributor Author) commented Jun 6, 2017

I removed throttling by subdomain, but may add this back.

What does this mean? Are foo.server.com and bar.server.com now considered the same thing? If so, I would suggest defaulting the max requests to 10-12 instead of 6, to cover this use case.

What I meant to say was server, not subdomain. foo.server.com and bar.server.com are treated separately. I added server throttling back since that comment.

lilleyse (Contributor Author) commented Jun 6, 2017

What's the plan with this w.r.t to HTTP2 and #5316?

As of now these PRs are separate and don't need to interfere with each other.

@lilleyse lilleyse force-pushed the request-scheduler-master branch from 84a3c1e to a2dae0c Compare June 7, 2017 21:24
@lilleyse lilleyse force-pushed the request-scheduler-master branch from a2dae0c to b9e8042 Compare June 7, 2017 22:08
lilleyse (Contributor Author) commented Jun 8, 2017

The code is ready to review again.

My internet connection is not super reliable here so I will hold off on final benchmarks until tomorrow.

lilleyse (Contributor Author) commented Jun 8, 2017

Stats on this branch. New takeaways:

  • Unthrottled tests are roughly equal to master in terms of speed and number of requests.
  • Throttled/prioritized tests perform similarly to master, and sometimes worse. I'm not too surprised by this since the throttling is still 6 per server like master, but there is added overhead in cancelling low-priority requests. The best improvements are in the slow-connection "zoom" tests.
  • Perceptually, the prioritization helps a lot in loading high detail stuff that is closest to you.
  • Testing is not super reliable since there is a luck factor involved. I often see variance of half a second or more between runs. Sometimes when I re-run a test later I see much better results. I also see wide variation in the number of tiles requested.
  • Why are the # of requests for the unthrottled tests different on this branch than master? I'm not sure, but running master today I see the same numbers.
Results on request-scheduler-master

New | HTTP2 server | no-throttle | Static
    5.59 seconds | 513 requests
    6.09 seconds | 489 requests
    5.33 seconds | 478 requests
    5.78 seconds | 504 requests

New | HTTP2 server | throttle 6 per-domain | Static
    4.94 seconds | 319 requests
    4.78 seconds | 330 requests
    4.89 seconds | 332 requests
    4.93 seconds | 323 requests

New | HTTP1 server | no-throttle | Static
    2.89 seconds | 594 requests
    2.74 seconds | 594 requests
    2.73 seconds | 594 requests
    2.61 seconds | 594 requests
    2.79 seconds | 594 requests
    2.56 seconds | 594 requests

New | HTTP1 server | throttle 6 per-domain | Static
    2.74 seconds | 527 requests
    2.89 seconds | 540 requests
    3.03 seconds | 554 requests
    2.91 seconds | 557 requests
    3.04 seconds | 543 requests

New | HTTP1 server | no-throttle | Zoom
    4.14 seconds | 793 requests
    4.13 seconds | 784 requests
    4.18 seconds | 782 requests

New | HTTP1 server | throttle 6 per-domain | Zoom
    3.86 seconds | 712 requests
    3.91 seconds | 754 requests
    3.89 seconds | 701 requests

==========================

New | HTTP1 server | no-throttle | Static | 4G
    24.12 seconds | 624 requests
    24.11 seconds | 624 requests

New | HTTP1 server | throttle 6 per-domain | Static | 4G
    15.04 seconds | 381 requests
    15.37 seconds | 391 requests
    15.34 seconds | 389 requests

New | HTTP1 server | no-throttle | Zoom | 4G
    32.83 seconds | 1026 requests
    35.21 seconds | 1082 requests

New | HTTP1 server | throttle 6 per-domain | Zoom | 4G
    7.40 seconds | 273 requests
    7.48 seconds | 276 requests

New | HTTP2 server | no-throttle | Static | 4G
    16.52 seconds | 446 requests
    17.12 seconds | 460 requests

New | HTTP2 server | throttle 6 per-domain | Static | 4G
    12.57 seconds | 324 requests
    12.08 seconds | 309 requests

lilleyse (Contributor Author) commented Jun 8, 2017

These are all 4G tests just to show the contrast better.

Static view - throttled - master vs. new.
[screenshot]

Static view - not throttled - master vs. new.
[screenshot]

Zoom view - throttled - master vs. new.
[screenshot]

Zoom view - not throttled - master vs. new.
[screenshot]

lilleyse (Contributor Author) commented Jun 8, 2017

The first screenshot shows the benefits the most. My overall impression is that prioritization isn't a huge win for the terrain/imagery system alone since the default traversal is already pretty good. The real benefit will come when we have multiple data sources with different traversal patterns - aka 3D Tiles on the globe.

pjcozzi (Contributor) commented Jun 9, 2017

@lilleyse this all sounds good to me for now, just leave the while (true) if nothing else is cleaner.

We can do more research as we optimize 3D Tiles over time.

@mramato I'd like to merge this soon. Do you want to look at it again?

lilleyse (Contributor Author) commented Jun 9, 2017

while (true)

I replaced it with a bool, still the same logic though.
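For illustration, the same loop logic expressed with a boolean flag instead of while (true) might look like this. A sketch only, not the merged code; issuePending and its parameters are hypothetical names:

```javascript
// Issue pending requests until either the queue is empty or the open
// activeRequests slots are used up. Same logic as a `while (true)` loop
// with breaks, restructured around an explicit boolean.
function issuePending(pending, openSlots) {
  let issued = 0;
  let finished = false;
  while (!finished) {
    if (pending.length === 0 || issued === openSlots) {
      finished = true; // nothing left to issue, or no more slots
    } else {
      pending.shift().active = true;
      issued += 1;
    }
  }
  return issued;
}
```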

mramato (Contributor) commented Jun 9, 2017

@mramato I'd like to merge this soon. Do you want to look at it again?

I trust you guys, feel free to merge (unless you are asking because you don't have time for another once-over).

@lilleyse merge in master.

@pjcozzi pjcozzi merged commit 6a8c7c2 into master Jun 9, 2017
@pjcozzi pjcozzi deleted the request-scheduler-master branch June 9, 2017 20:10