- From: Rik Cabanier <cabanier@gmail.com>
- Date: Wed, 14 May 2014 21:41:28 -0700
- To: Katelyn Gadd <kg@luminance.org>
- Cc: WHAT Working Group <whatwg@whatwg.org>, Jürg Lehni <lists@scratchdisk.com>, Ian Hickson <ian@hixie.ch>
On Wed, May 14, 2014 at 7:30 PM, K. Gadd <kg@luminance.org> wrote:

> Is it ever possible to make canvas-to-canvas blits consistently fast?
> It's my understanding that browsers still make intelligent/heuristic-based
> choices about which canvases to accelerate, if any, and that it depends on
> the size of the canvas, whether it's in the DOM, etc. I've had to report
> bugs related to this against Firefox and Chrome in the past; I'm sure more
> exist. There's also the scenario where you need to blit between Canvas2D
> canvases and WebGL canvases - the last time I tried this, a single blit
> could cost *hundreds* of milliseconds because of pipeline stalls and
> cpu<->gpu transfers.

Chrome has made some optimizations recently in this area and will try to
keep everything on the GPU for transfers between canvas 2d and WebGL. Are
you still seeing issues there?

> Canvas-to-canvas blits are a way to implement layering, but it seems like
> making it consistently fast via canvas-canvas blits is a much more
> difficult challenge than making sure that there are fast & cheap ways to
> layer separate canvases at a composition stage. The latter just requires
> that the browser have a good way to composite the canvases; the former
> requires that various scenarios with canvases living in CPU and GPU
> memory, deferred rendering queues, etc. all get resolved efficiently in
> order to copy bits from one place to another.

Small canvases are usually not hardware accelerated. Do you have any data
that this is causing slowdowns? Layering should also mitigate this since,
if the canvas is HW accelerated, so should its layers be.

> (In general, I think any solution that relies on using canvas-on-canvas
> drawing any time a single layer is invalidated is suspect. The browser
> already has a compositing engine for this that can efficiently update
> only modified subregions and knows how to cache reusable data;
> re-rendering the entire surface from JS on change is going to be a lot
> more expensive than that.

I don't think the canvas code is that smart. I think you're thinking about
drawing SVG and HTML.

> Don't some platforms actually have compositing/layers at the OS level,
> like CoreAnimation on iOS/OSX?)

Yes, but AFAIK they don't use this for Canvas.

> On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni <lists@scratchdisk.com> wrote:
> > On Apr 30, 2014, at 00:27, Ian Hickson <ian@hixie.ch> wrote:
> >
> >> On Mon, 7 Apr 2014, Jürg Lehni wrote:
> >>>
> >>> Well this particular case, yes. But in the same way we allow a group
> >>> of items to have an opacity applied to it in Paper.js, and expect it
> >>> to behave the same way as in SVG: the group should appear as if its
> >>> children were first rendered at 100% alpha and then blitted over with
> >>> the desired transparency.
> >>>
> >>> Layers would offer exactly this flexibility, and having them around
> >>> would make a whole lot of sense, because currently the above can only
> >>> be achieved by drawing into a separate canvas and blitting the result
> >>> over. The performance of this is really low on all browsers, a true
> >>> bottleneck in our library currently.
> >>
> >> It's not clear to me why it would be faster if implemented as layers.
> >> Wouldn't the solution here be for browsers to make canvas-on-canvas
> >> drawing faster? I mean, fundamentally, they're the same feature.
> >
> > I was perhaps wrongly assuming that including layering in the API would
> > allow the browser vendors to better optimize this use case. The problem
> > with the current solution is that drawing a canvas into another canvas
> > is inexplicably slow across all browsers. The only reason I can imagine
> > for this is that the pixels are copied back and forth between the GPU
> > and the main memory, and perhaps converted along the way, while they
> > could simply stay on the GPU as they are only used there. But reality
> > is probably more complicated than that.
> >
> > So if the proposed API addition would allow better optimization, then
> > I'd be all for it. If not, then I am wondering how I can get the
> > vendors' attention to improve this particular case. It really is very
> > slow currently, to the point where it doesn't make sense to use it for
> > any sort of animation technique.
> >
> > J
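For concreteness, here is a minimal sketch of the blit-based group-opacity
workaround Jürg describes: draw the group's children at full alpha into a
scratch canvas, then drawImage() the scratch canvas onto the main canvas
with globalAlpha set. The drawChild() helper and the canvas sizes are
illustrative placeholders, not Paper.js API.

```js
// Sketch (not Paper.js code): group opacity via an offscreen canvas blit.
const main = document.querySelector('canvas');
const ctx = main.getContext('2d');

const scratch = document.createElement('canvas');
scratch.width = main.width;
scratch.height = main.height;
const sctx = scratch.getContext('2d');

function drawGroup(children, opacity) {
  sctx.clearRect(0, 0, scratch.width, scratch.height);
  for (const child of children) {
    drawChild(sctx, child);        // placeholder: children rendered at 100% alpha
  }
  ctx.save();
  ctx.globalAlpha = opacity;       // the desired group transparency
  ctx.drawImage(scratch, 0, 0);    // the canvas-to-canvas blit whose cost is at issue
  ctx.restore();
}
```

Every invalidation re-runs both the clear and the full-surface blit, which
is exactly the per-frame cost being debated in this thread.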
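Likewise, a minimal sketch of the Canvas2D/WebGL transfer K. Gadd mentions:
a WebGL-backed canvas drawn into a 2D canvas each frame. renderScene() is a
placeholder for the application's GL drawing; whether the drawImage() stays
on the GPU or forces a readback is up to the browser.

```js
// Sketch: compositing a WebGL canvas into a 2D canvas once per frame.
const glCanvas = document.createElement('canvas');
const gl = glCanvas.getContext('webgl');

const out = document.querySelector('canvas');
const ctx2d = out.getContext('2d');

function composite() {
  renderScene(gl);                  // placeholder: WebGL drawing for this frame
  ctx2d.drawImage(glCanvas, 0, 0);  // the blit that may trigger a GPU->CPU copy
  requestAnimationFrame(composite);
}
requestAnimationFrame(composite);
```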
Received on Thursday, 15 May 2014 04:41:54 UTC