Review OffscreenCanvas, including ImageBitmapRenderingContext #141
Hey Justin, do you happen to have an explainer doc somewhere for this feature? Or sample code that shows the use cases and how this API addresses them? Regards
Hi, the original proposal doc was part of the OffscreenCanvas feature proposal. That doc got cleaned up after ImageBitmapRenderingContext landed in the WHATWG standard (to avoid confusion with respect to landed vs. unlanded parts of the spec). Here is the pre-cleanup revision of the OffscreenCanvas proposal, which includes ImageBitmapRenderingContext: https://wiki.whatwg.org/index.php?title=OffscreenCanvas&oldid=10087
Here is a basic test that shows how the API is used.
We'd be happy to have more examples and an introduction section (cf. https://html.spec.whatwg.org/#introduction-13) in the spec. Last we talked, I think you were waiting to do that until OffscreenCanvas also got integrated into the spec, but maybe there's value in doing it separately as well.
Right. I'll take care of that.
The TAG has been looking into this at our face-to-face meeting just now. It seems like it might be helpful to see a little more of an explainer, with examples. (The best explainer we know of right now is from #141 (comment).) We're also, at least so far, a little confused by the naming of some of the objects and methods here. It's not clear to me how the ImageBitmapRenderingContext part of the proposal relates to the transferToImageBitmap()/commit() part of the proposal. And there seems to be a good bit of concern about the lack of things like requestAnimationFrame, and the lack of ability to synchronize with audio/video. So, again, it would be good to see some more end-to-end examples of how this is to be used.
Added an intro with an example: whatwg/html#2045. The example is a bit weak for now; there will be stronger examples once OffscreenCanvas is added.
OffscreenCanvas has two modes of operation: the ImageBitmap way and the commit way. The idea is that you'd typically use one or the other depending on the requirements of your use case.

The commit way requires no script intervention on the main thread (the browsing context's event loop). You just call commit() and the results are pushed to the display. This path is the most performant and allows implementations to take all sorts of shortcuts to reduce graphics-update overhead. However, the commit flow does not allow committed frames to be synchronized with graphics updates of the surrounding page, which can be a problem. The association between the OffscreenCanvas and its placeholder canvas element is established when the OffscreenCanvas is created, by calling transferControlToOffscreen() on the placeholder canvas.

The other way to use OffscreenCanvas is to produce explicit frames by calling transferToImageBitmap(). The resulting ImageBitmap object can then be postMessage'd back to the main event loop, where it can be taken in with ImageBitmapRenderingContext.transferFromImageBitmap(). Then the graphics update that reveals the new canvas content can be synchronized with other changes to the DOM.
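A minimal sketch of both flows, assuming a placeholder canvas in the page and a worker script (file names, element ids, and the drawing code are illustrative):

```js
// --- Mode 1: the commit() flow. main.js: ---
const placeholder = document.querySelector('#gl-canvas');
const transferred = placeholder.transferControlToOffscreen();
const worker1 = new Worker('commit-render.js');
worker1.postMessage({ canvas: transferred }, [transferred]);

// --- Mode 1: commit-render.js (worker): ---
onmessage = (e) => {
  const gl = e.data.canvas.getContext('webgl');
  gl.clearColor(0, 0.5, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.commit(); // frame goes straight to the placeholder's display; no main-thread script
};

// --- Mode 2: the ImageBitmap flow. bitmap-render.js (worker): ---
const offscreen = new OffscreenCanvas(300, 150);
const gl2 = offscreen.getContext('webgl');
// ...draw with gl2, then produce an explicit frame and transfer it out:
const frame = offscreen.transferToImageBitmap();
postMessage({ frame }, [frame]);

// --- Mode 2: main.js, where the page controls when the update becomes visible: ---
const ctx = document.querySelector('#bitmap-canvas').getContext('bitmaprenderer');
const worker2 = new Worker('bitmap-render.js');
worker2.onmessage = (e) => ctx.transferFromImageBitmap(e.data.frame);
```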
requestAnimationFrame is on the way. Synchronization with audio/video cannot be done on a worker (without major additions to the API) since those tags are not accessible in workers. However, by using the ImageBitmap flow (as opposed to commit), it would be possible to render in workers and take care of synchronizing the graphics update with audio/video in the main event loop. But this really is not something OffscreenCanvas is well suited for, IMHO.
Given #144, we're going to make this issue cover all of OffscreenCanvas.
OK, we've just discussed this in a breakout at our face-to-face meeting. I think we're more comfortable with the two different modes of operation now, and why both modes are valuable (avoiding (maybe) the UI thread for speed vs. letting the UI thread control when the update happens, to synchronize with other changes), although it again took us a while to step through both of them and understand how they work. There was a little bit of concern about the number of objects involved (and types in general), although we don't have any suggestions for how to reduce that. I think @travisleithead was a little concerned about why both are needed. I think we're pretty close to being able to close this issue, but curious first if you have any feedback on the above comments.
@junov, curious if you have any feedback on #141 (comment) above -- particularly on whether there was any consideration of ways to reduce the number of objects involved here. It seems a little awkward, but we also didn't see any obvious alternatives.
The current thinking is that the performance and latency of transferToImageBitmap / transferFromImageBitmap will be OK for multithreaded content which absolutely has to perform updates in sync with the DOM. We want to gain experience with both this and the commit()-based rendering style which is fully decoupled from DOM updates. I don't see a good way to change commit() to make it optionally, implicitly, sync up with DOM updates on the main thread.
I feel like the problem with the current APIs is that they ignore what's really happening under the hood: canvases are just pairs of textures attached to framebuffers. WebGL just followed the path of least resistance and thought of canvases like OS windows. Parts of it were not actually designed; they were just done without thinking about it, based on what 2D canvas was doing (if thought was given, I'm sorry; I'm just trying to provoke some thought now).

In particular, with the way contexts work right now, it was assumed every canvas should have its own context. Did anyone question that? It would have been just as valid to create contexts separately from canvases and then bind that context to whatever canvas you currently want to render to. If it were important, the rule could have been that whichever context is first bound to a canvas is the only context that can ever be bound to that canvas, but that the same context can be used with as many canvases as you want. That design would have solved sharing resources and broken absolutely nothing. Imagine the WebGL API were this:
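(A reconstruction for illustration; the constructor and bindCanvas are hypothetical names, not part of real WebGL.)

```js
// Hypothetical: create a context with no canvas attached.
const gl = new WebGLRenderingContext();

// Bind it to whichever canvas you want to render to next.
gl.bindCanvas(canvasA);   // hypothetical method
drawScene(gl);            // textures/buffers created once on this context...

gl.bindCanvas(canvasB);
drawScene(gl);            // ...are usable here too, with no copies
```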
Similarly, people have wanted to be able to not commit the changes (no implicit swapbuffers/resolve). So, something like:
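(Again hypothetical; swapBuffers is an invented name for an explicit present.)

```js
// Hypothetical: nothing is displayed until an explicit swap, as native apps expect.
gl.bindCanvas(canvasA);
drawFirstHalfOfScene(gl);   // spread rendering across multiple browser frames
// ...later, perhaps several frames later:
drawSecondHalfOfScene(gl);
gl.swapBuffers();           // hypothetical explicit present; only now does the canvas update
```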
That would have solved all of the resource sharing issues. It would also have solved rendering over multiple frames without having to manually render to a framebuffer. Note that native apps expect this behavior: nothing shows up until they call swapbuffers. They don't have that option in the current WebGL API and instead have to write other solutions. If that was how WebGL worked before workers, would the suggested APIs for workers change? My point is that canvases are really just two textures: a displayBuffer and a drawingBuffer. They don't really need a context per canvas, and they don't need to implicitly swap. We went that way mostly because we followed 2D canvas. If we were to imagine a past where we had chosen this other path, would we be coming up with better/simpler solutions? I worry about an API where I'm told "browsers will figure out magic behind the scenes to make things performant" when it's possible a different API could remove the magic and just be performant by design.
As someone coming at this with a bit less graphics background: what are the resources that you're talking about sharing?
Resources are basically textures and geometry data. As it is now, if you load a 2048x2048 texture (16 MB) into WebGL it can only be used directly on a single canvas. If you want to use the same texture in another canvas, you have to load that texture again into the context for the second canvas, because textures (and geometry) cannot be shared across contexts (one context per canvas; all resources belong to a single context). The creative workaround is to make a canvas offscreen, render to that, then draw that canvas into the other canvases. That's slow. It can be optimized, but no amount of optimization will erase the large copy that happens when copying from one canvas to another.

Why one context per canvas? Because honestly I think it was just assumed to be the right thing to do (if other ways were discussed, I'm unaware of them). Why can't you share resources across contexts? Because OpenGL has some very strict rules on how two contexts share resources and on when changes to a resource made in one context become visible in another context. Those rules are extremely hard to enforce so as to make sure that sharing works correctly everywhere across all platforms, something important for WebGL. So sharing was never enabled, but that's all based on the idea that there should be one context per canvas. If instead contexts and canvases were disconnected, and we recognized that canvases really are just textures themselves, the entire problem of sharing disappears. If you want to share, make one context and use it with as many canvases as you want. If you don't want to share, make a new context.
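For illustration, the workaround looks roughly like this (renderScene stands in for real drawing code):

```js
// Render once into an off-document WebGL canvas...
const src = document.createElement('canvas');
const gl = src.getContext('webgl');
renderScene(gl); // hypothetical draw function

// ...then copy into every visible canvas. Each drawImage() copies the whole
// framebuffer, which is the cost that optimization can't erase. (The draw and
// the copies must happen in the same task unless preserveDrawingBuffer is set.)
for (const dest of document.querySelectorAll('canvas.view')) {
  dest.getContext('2d').drawImage(src, 0, 0, dest.width, dest.height);
}
```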
So in the OpenGL API these resources are scoped to a GL context? It seems like one could also address that by retaining the concept of a canvas context, but adding a GL context object (distinct from the canvas context) that could be shared between multiple canvas contexts by passing it to them when they're initialized?
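Something like this sketch, where WebGLDevice and the device option are invented names:

```js
// Hypothetical shareable "GL device" that owns textures, buffers, and programs.
const device = new WebGLDevice();

// Each canvas keeps its own canvas context, but both share the device's resources.
const ctxA = canvasA.getContext('webgl', { device });
const ctxB = canvasB.getContext('webgl', { device });
```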
@greggman, if bindCanvas were the primitive for rendering to multiple canvases from a single WebGL context, there would still be synchronization issues when rendering from workers. Fundamentally, workers are not synced with the main thread. It'd be possible to invent new web primitives like the concept of swap groups (see GLX_SGIX_swap_group and GLX_SGIX_swap_barrier), but after much design it was decided to phrase these primitives in terms of existing primitives on the web platform (ImageBitmap, Transferables, postMessage).

For rendering to a single canvas from a worker, when those updates are not synchronized with the main thread's DOM updates, OffscreenCanvas and commit() will have excellent performance. The recycling of older textures would be roughly as complicated when binding a single WebGL context to multiple canvases or OffscreenCanvases as it would be when repeatedly resizing a single OffscreenCanvas to multiple dimensions and calling transferToImageBitmap / ImageBitmapRenderingContext.transferFromImageBitmap to display the frames. The bindCanvas model would have new gotchas that would have to be thought through, like what happens when attempting to bindCanvas one canvas to multiple WebGL contexts.

I think we should finish implementing the current APIs and measure the performance characteristics. In Chrome the implementation is finally almost done. If it turns out the ergonomics of the API aren't good, or it doesn't perform well for real-world uses, then we can look into recasting it.
Note that there's now a Blink intent-to-ship thread for this feature.
Hello. Is there currently a way to have a single WebGLRenderingContext draw to multiple HTMLCanvasElements (or OffscreenCanvases)?
I'm not sure where to bring this up, but under the current proposal commit() blocks. With rAF, the browser just doesn't call the callback when the page isn't visible. With a blocking commit(), though, what's the plan?
I can imagine the following patterns with commit(): (1) a pure render loop that does nothing but draw and commit, and (2) a loop that draws and commits but also expects to process events (messages from the main thread) between frames.
In case 1 above, blocking commit() if the page is not the front page seems reasonable. In case 2 it does not, because the worker was expecting to be able to process events. How will browsers be able to handle the two cases?
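A minimal sketch of the two cases (gl, render, and resize are placeholders):

```js
// Case 1: a pure render loop; blocking inside commit() while hidden seems fine.
while (true) {
  render(gl);
  gl.commit();
}

// Case 2: a loop that also expects to process events between frames.
self.onmessage = (e) => resize(e.data);  // starved if commit() blocks indefinitely
function frame() {
  render(gl);
  gl.commit();
  setTimeout(frame, 0);                  // yield so onmessage can run
}
frame();
```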
So I tried it, and at least in Chrome the messages never arrive. This example does the following: the worker has no way of knowing what size the drawing buffer needs to be, so the main thread sends that info whenever it changes. But once the worker starts, no messages ever arrive from the main thread, even though the main thread is sending them. Given that gl.commit() is synchronous and many other things are going on, it seemed best to call commit() last; in this sample I swapped the order. This seems far from completely specced. It could be a bug in Chrome, but AFAICT it's not wrong based on the spec. I think the spec should be clear about how these messages get processed when there's a rAF + commit loop, as well as with just a commit loop. Here's a sample with a commit loop (no rAF):
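(A sketch: the resize plumbing here is illustrative, not the fiddle's exact code.)

```js
// worker.js: commit()-driven loop, no rAF.
let width = 300, height = 150;
self.onmessage = (e) => {        // the main thread posts new sizes here...
  ({ width, height } = e.data);  // ...but in Chrome this never fires once the loop starts
};

const gl = offscreen.getContext('webgl');
for (;;) {
  offscreen.width = width;       // never observed to change, since messages never arrive
  offscreen.height = height;
  render(gl);                    // hypothetical draw function
  gl.commit();                   // synchronous; the only point at which the loop "yields"
}
```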
It also tries to download an image.
As the original rAF creator, @rocallahan, any input on this seemingly platform-breaking API?
Here's yet another use case that's not working. Let's say you'd prefer to run at a smooth frame rate below 60 FPS, instead of a janky sometimes-60, sometimes-slower one.
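So you do something like this sketch (render and the offscreen canvas are placeholders for the fiddle's actual code):

```js
// worker.js: target a steady ~15 FPS instead of an uneven 60.
const gl = offscreen.getContext('webgl');

function frame() {
  render(gl);                    // hypothetical draw function
  gl.commit();                   // present the finished frame
  setTimeout(frame, 1000 / 15);  // schedule the next one at the chosen rate
}
frame();
```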
http://jsfiddle.net/greggman/y36aujrf/ Apparently this is not supposed to work?!? It flickers like crazy in Chrome, and the bug report response is "works as intended". Really?
It looks like when requestAnimationFrame was added to workers (which is still in progress in whatwg/html#3677), the behavior chosen was to automatically commit frames for OffscreenCanvas contexts which are modified in the rAF callback. This behavior was probably chosen in order to behave most similarly to rAF on the main thread. Under this definition it's an error to call commit() from inside a rAF callback on a worker. The spec and implementations should be updated to throw an exception from commit() in this case, to make it clearer.

Thanks for putting together your examples. Per your Chrome bug report http://crbug.com/859275, here's a revised version of your resizing example which removes the call to commit() and works well: http://jsfiddle.net/gmpkvu30/2/, and a revised version of your 15 FPS animation example which removes the call to commit() and also works well: http://jsfiddle.net/y36aujrf/3/

commit() can usefully be used within setTimeout() callbacks on a worker, and also within WebAssembly threads, where it's basically the only way to get frames out for display. Yes, in this case events aren't processed on the thread, but WebAssembly threads will expect to use the heap's SharedArrayBuffer to transfer data in and out.
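For illustration, the two worker styles look roughly like this (render is a placeholder):

```js
// rAF style: frames are committed automatically when the callback returns,
// so commit() is not called (and, per the above, should eventually throw here).
function rafFrame() {
  render(gl);
  requestAnimationFrame(rafFrame);
}
requestAnimationFrame(rafFrame);

// setTimeout (or WebAssembly-thread) style: commit() is the only way to present.
function timeoutFrame() {
  render(gl);
  gl.commit();
  setTimeout(timeoutFrame, 1000 / 15);
}
timeoutFrame();
```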
Note: we've taken a look at this issue jointly with #288; please refer to that issue for our latest comments.
Let's consolidate our discussion into #288. Closing this issue.
Just to clarify from the F2F discussion today: we're happy with the shape of the API.
I am reading this thread to gain an understanding of multiple-canvas rendering, and am a bit confused as to what the best way of doing this is right now.
For WebGL, see this article for some ideas on multiple canvases. Even in WebGPU, which is designed from the ground up to render to multiple canvases, the technique at the bottom of that article is the fastest. For more details on multiple canvases you should probably ask on Stack Overflow or some other more general help forum.
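For reference, that technique looks roughly like this sketch: one full-window canvas behind the page, with each view drawn into the screen rectangle of a placeholder element (bigCanvas and drawViewScene are placeholders):

```js
const gl = bigCanvas.getContext('webgl'); // one canvas covering the window
gl.enable(gl.SCISSOR_TEST);

for (const elem of document.querySelectorAll('.view')) {
  const r = elem.getBoundingClientRect();
  // CSS has a top-left origin; GL has a bottom-left origin.
  const bottom = gl.canvas.clientHeight - r.bottom;
  gl.viewport(r.left, bottom, r.width, r.height);
  gl.scissor(r.left, bottom, r.width, r.height);
  drawViewScene(elem, gl); // hypothetical per-view draw
}
```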
I am already using that technique; the only caveat is that there is a certain lag/sync issue when scrolling. From your earlier answer, I was instead pondering drawing to one large canvas (where views are allocated to certain offsets) rather than constantly switching draw textures/buffers, then generating a bitmap, and subsequently rendering the relevant parts of the image.
Requesting that the TAG review the ImageBitmapRenderingContext interface spec'ed here: https://html.spec.whatwg.org/multipage/scripting.html#the-imagebitmap-rendering-context