- From: Charles Pritchard <chuck@jumis.com>
- Date: Tue, 23 Nov 2010 17:32:41 -0800
- To: "Tab Atkins Jr." <jackalmage@gmail.com>
- CC: public-fx@w3.org, robert@ocallahan.org
On 11/23/10 4:25 PM, Tab Atkins Jr. wrote:
> On Tue, Nov 23, 2010 at 4:12 PM, Charles Pritchard <chuck@jumis.com> wrote:
>> There's a difference between using -element to make a dynamic background
>> (or otherwise), and using it to fill a 'vector' element. We can use
>> existing functions, with the existing canvas element, but I think it'd be
>> wise to consider the "CSS Context" as an automated alternative.
>>
>> This is basically what I'm thinking of, as the typical case:
>>
>> // with a css transition applied to the canvas/parentNode on width changes.
>> var increaseBy = 2;
>> parentNode.style.width *= increaseBy;
>> ctx.canvas.style.width *= increaseBy;
>> ontransitionend = function() {
>>     ctx.width = parseInt(parentNode.style.width);
>>     ctx.scale(increaseBy, increaseBy);
>>     myDrawCommands();
>> }
>>
>> There are all sorts of tricky techniques that could be used within that
>> transition duration to make it more aesthetically pleasing. For instance,
>> hooking into transitionstart instead of transitionend would mean that the
>> image does not grow progressively blurrier during the transformation
>> animation.
>
> Are you suggesting that authors actually write the above code, or that
> browsers do something magical that's equivalent to that code?

I'm suggesting that browsers do some magic for "CSS Canvas"; I'm stating
that authors will/do actually write that code when the use case requires
it, with HTML Canvas.

>> Things get confusing/complex if the canvas is assigned as a paint server
>> to multiple elements having different sizes.
>>
>> For example: if the same paint server is assigned to a 20x20 area and to
>> a 1000x1000 area, it'd be faster [CPU cycles] to render to two different
>> backing stores than to size down the 1000x1000 bitmap.
>
> Possibly. That depends on what's being drawn, and how efficient the
> implementation is.
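The quoted snippet is a sketch rather than runnable code: `style.width` is a string like `"400px"`, so multiplying it in place does nothing, and `ctx.width` should be `ctx.canvas.width`. A minimal working version of the same transitionend pattern might look like the following (the helper names `resizeAndRedraw` and `computeBackingWidth` are hypothetical; `draw` stands in for `myDrawCommands` from the original message):

```javascript
// Pure helper: CSS pixels times a scale factor gives the backing-store size.
function computeBackingWidth(cssWidth, scale) {
  return Math.round(cssWidth * scale);
}

// Grow the canvas's CSS width (letting a CSS transition animate it),
// then, once the transition ends, resize the backing store to match
// and redraw at the new scale.
function resizeAndRedraw(canvas, increaseBy, draw) {
  var cssWidth = parseInt(canvas.style.width, 10) * increaseBy;
  canvas.style.width = cssWidth + 'px';
  canvas.addEventListener('transitionend', function handler() {
    canvas.removeEventListener('transitionend', handler);
    canvas.width = computeBackingWidth(cssWidth, 1); // 1:1 backing store
    var ctx = canvas.getContext('2d');
    ctx.scale(increaseBy, increaseBy);
    draw(ctx); // the author's drawing routine, e.g. myDrawCommands
  });
}
```

During the transition itself the old bitmap is stretched (hence the blurriness the message mentions); listening on transitionstart and redrawing at the final size up front trades that blur for an instant jump in detail.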
> (For example, Chrome currently is much less efficient at scaling <canvas>
> than <img>, but that's just a bug -- we cache scaled <img>s so we don't
> have to continually recompute, but we don't do this for <canvas>.)

It's the "possibly" that makes it a good candidate for CSS/SVG behavior, as
the SVG rendering implementation runs a lot of possibly-style logic.

Regarding scaling -- if the [canvas] tag is animated/repainted, a cache
won't help. Recomputing a 1mpx image down to 400px may be slower than
rasterizing twice. But again, that's one of those estimation cases for the
SVG implementation.

>> Another issue, if we are toying with the bitmap backing, is considering
>> ImageData/CanvasPixelArray performance. Because we're talking about the
>> realm of SVG, generally, the user should be using SVG FE, and not
>> ImageData.
>
> Agreed that the author should be using SVG itself if possible, rather
> than scripting at a canvas. But scripting at a canvas is great when
> you're trying to do something that doesn't have an existing abstraction.

I certainly agree with that. I wonder if it makes more sense to do those
operations on a standard HTML Canvas, and then use drawImage to transfer
the result to a CSS-managed canvas. Automatic sizing/management of the
backing store might lead to some strangeness with
CanvasPixelArray/ImageData.

WebGL filters would be better candidates than ImageData for most pixel
manipulation in the context of SVG, as SVG doesn't expose pixel data to
the scripting environment.
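The "do the pixel work on a regular canvas, then drawImage into the CSS canvas" idea could be sketched as below. This assumes WebKit's `document.getCSSCanvasContext()` entry point for CSS canvas backgrounds (paired with `background: -webkit-canvas(name)` in the stylesheet); the canvas name, sizes, and the `invertPixels` helper are illustrative:

```javascript
// Pure helper: invert the RGB channels of an ImageData-style byte array,
// leaving alpha untouched. Kept separate so the pixel work is testable
// outside the browser.
function invertPixels(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i]     = 255 - data[i];     // R
    data[i + 1] = 255 - data[i + 1]; // G
    data[i + 2] = 255 - data[i + 2]; // B
  }
  return data;
}

// Browser-side sketch: manipulate pixels on a scratch HTML canvas, then
// copy the finished bitmap into the CSS-managed canvas with drawImage,
// so ImageData never touches the automatically sized backing store.
function transferToCssCanvas() {
  var scratch = document.createElement('canvas');
  scratch.width = scratch.height = 256;
  var sctx = scratch.getContext('2d');

  var img = sctx.getImageData(0, 0, 256, 256);
  invertPixels(img.data);
  sctx.putImageData(img, 0, 0);

  // 'mybg' is a hypothetical name, referenced from CSS as
  //   background: -webkit-canvas(mybg);
  var cssCtx = document.getCSSCanvasContext('2d', 'mybg', 256, 256);
  cssCtx.drawImage(scratch, 0, 0);
}
```

This keeps the read-back surface at a fixed, author-controlled size, which sidesteps the "strangeness" of reading CanvasPixelArray out of a backing store whose dimensions the browser manages.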
Received on Wednesday, 24 November 2010 01:33:13 UTC