From: Ian Hickson <ian@hixie.ch>
Date: Fri, 9 May 2014 19:02:29 +0000 (UTC)
To: whatwg@whatwg.org
I had been waiting until the situation around canvas in workers cleared up a bit, but having spoken to some browser vendors it seems that canvas in workers, while desired by everyone, is not a high-priority issue and so might not get cleared up for some time. Therefore, I'm now looking again at some other canvas issues that we might be able to resolve first.

On Thu, 18 Jul 2013, Justin Novosad wrote:
>
> To help us iterate further, I've attempted to capture the essence of this thread on the whatwg wiki, using the problem solving template. I tried to capture the main ideas that we seem to agree on so far and I started to think about how to handle special cases.
>
> http://wiki.whatwg.org/wiki/ImageBitmap_Options

Are the "strongly desired options" in the above wiki page still the options we should be adding? (For my own edification: when, how, and why would you use the premultiplyAlpha option? What would using it incorrectly look like?)

On Thu, 11 Jul 2013, Kenneth Russell wrote:
>
> The step of preparing the image for use, either with WebGL or 2D canvas, is expensive. Today, this step is necessarily done synchronously when an HTMLImageElement is uploaded to WebGL. The current ImageBitmap proposal would still require this synchronous step, so for WebGL at least, it provides no improvement over the current HTML5 APIs. A major goal of ImageBitmap was to allow Web Workers to load them, and even this ability currently provides no advantage over HTMLImageElement.
>
> It is never possible to figure out automatically how the image needs to be treated when preparing it for use with WebGL. I'm not sure where that idea came from. On the contrary, there are eight possibilities (2^3), and different applications require different combinations.

The wiki page proposal has 16 possibilities; are some of these unnecessary?

On Fri, 12 Jul 2013, Justin Novosad wrote:
>
> The main concern I have with all this is the potential for OOM crashes.
> I'm happy as long as the spec remains vague about what "undue latency" means, so we still have the possibility of gracefully degrading performance in low memory conditions by evicting prepared/decoded buffers when necessary, to only hang on to compressed copies in the in-memory resource cache. As far as the decode cache is concerned, the idea I have is to give priority to pixel buffers that are held by ImageBitmap objects, and only evict them as a last resort.

Certainly the spec will always allow such leniency. At the end of the day, performance requirements are meaningless, since we can't control the hardware or machine load anyway.

On Tue, 16 Jul 2013, Kenneth Russell wrote:
>
> Additionally, the WebGL spec can be updated to state that the parameters UNPACK_FLIP_Y_WEBGL, etc. don't apply to ImageBitmap, so the only way to affect the decoding is with the dictionary of options.

Is there some list from WebGL that we can reuse for createImageBitmap()?

On Thu, 18 Jul 2013, K. Gadd wrote:
>
> Ultimately the core here is that without control over colorspace conversion, any sort of deterministic image processing in HTML5 is off the table, and you have to write your own image decoders, encoders, and manipulation routines in JavaScript using raw typed arrays. Maybe that's how it has to be, but it would be cool to at least support basic variations of these use cases in Canvas since getImageData/putImageData already exist and are fairly well-specified (other than this problem, and some nits around source rectangles and alpha transparency).

Given that the user's device could be a very low-power device, or one with a very small screen, but the user might still want to be manipulating very large images, it might be best to do the "master" manipulation on the server anyway.
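(For concreteness: one source of the non-determinism being described can be reproduced with plain arithmetic, no canvas involved. If an implementation's backing store keeps pixels premultiplied, a write/read round trip quantizes low-alpha pixels. This is only a sketch of the arithmetic, not any particular implementation; the helper name is made up.)

```javascript
// Premultiply then unpremultiply one 8-bit RGBA pixel, the way a
// backing store that keeps premultiplied pixels would on store/readback.
function roundTrip(r, g, b, a) {
  if (a === 0) return [0, 0, 0, 0];
  // Store: premultiply each channel by alpha, rounding to 8 bits.
  const pr = Math.round(r * a / 255);
  const pg = Math.round(g * a / 255);
  const pb = Math.round(b * a / 255);
  // Readback (getImageData): unpremultiply back to straight alpha.
  return [
    Math.min(255, Math.round(pr * 255 / a)),
    Math.min(255, Math.round(pg * 255 / a)),
    Math.min(255, Math.round(pb * 255 / a)),
    a,
  ];
}

console.log(roundTrip(200, 100, 50, 255)); // [200, 100, 50, 255] — exact
console.log(roundTrip(200, 100, 50, 3));   // [170, 85, 85, 3] — heavily quantized
```

Opaque pixels survive exactly; translucent ones come back changed, which is why pixel-exact processing through getImageData/putImageData can't be relied on without control over the pixel format.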
> Out of the features suggested previously in the thread, I would immediately be able to make use of control over colorspace conversion and an ability to opt into premultiplied alpha. Not getting premultiplied alpha, as is the case in virtually every canvas implementation I've tried, has visible negative consequences for image quality and also reduces the performance of some use cases where bitmap manipulation needs to happen, due to the fact that premultiplied alpha is the 'preferred' form for certain types of rendering and the math works out better. I think the upsides to getting premultiplication are the same here as they are in WebGL: faster uploads/downloads, better results, etc.

Can you elaborate on exactly what this would look like in terms of the API implications? What changes to the spec did you have in mind?

> To clearly state what would make ImageBitmap useful for the use cases I encounter and my end-users encounter: ImageBitmap should be a canonical representation of a 2D bitmap, with a known color space, known pixel format, known alpha representation (premultiplied/not premultiplied), and ready for immediate rendering or pixel data access. It's okay if it's immutable, and it's okay if constructing one from an <img> or a Blob takes time, as long as once I have an ImageBitmap I can use it to render and use it to extract pixel data without user configuration/hardware producing unpredictable results.

This seems reasonable, but it's not really detailed enough for me to turn it into spec. What colour space? What exactly should we be doing to the alpha channel?

On Wed, 17 Jul 2013, Justin Novosad wrote:
>
> I am wondering why it is important for image elements to be loaded. Is it in case the image element goes out of scope or the src attribute changes before the load completes?
> If that is the issue, the implementation can work around that internally to ensure that the ImageBitmap is created from whatever resource was referenced by the source image when createImageBitmap was called. I think it would be nice to be able to avoid the JS red tape associated with chaining two async events (image onload -> createImageBitmap).

The limitation is mostly just for simplicity in defining and implementing the API. I agree that it could be changed. I've filed a bug to track this; if implementors want to support createImageBitmap() from a non-loaded <img> (waiting for the load to complete), please comment on the bug:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=25634

On Wed, 17 Jul 2013, K. Gadd wrote:
> On Wed, Jul 17, 2013 at 5:17 PM, Ian Hickson <ian@hixie.ch> wrote:
> > On Tue, 18 Dec 2012, Kevin Gadd wrote:
> > >
> > > Is it possible to expose the width/height of an ImageBitmap, or even expose all the rectangle coordinates? Exposing width/height would be nice for parity with Image and Canvas when writing functions that accept any drawable image source.
>
> By 'the other coordinates' I mean that if you constructed it from a subrectangle of another image (via the sx, sy, sw, sh parameters) it would be good to expose *all* those constructor arguments. This allows you to more easily maintain a cache of ImageBitmaps without additional bookkeeping data.

Can you elaborate on this? Do you mean, e.g. making one ImageBitmap per sprite in a sprite sheet? If so, wouldn't you index by the name or ID of the sprite rather than the coordinates of the sprite in the sheet?

> > On Tue, 18 Dec 2012, Kevin Gadd wrote:
> > >
> > > Sorry, upon reading over the ImageBitmap part of the spec again I'm confused: Why is constructing an ImageBitmap asynchronous?
> >
> > Because it might involve network I/O.
> > > I thought any decoding isn't supposed to happen until drawImage, so I don't really understand why this operation involves a callback and a delay. Making ImageBitmap creation async means that you *cannot* use this as a replacement for drawImage source rectangles unless you know all possible source rectangles in advance. This is not possible for many, many use cases (scrolling through a bitmap would be one trivial example).
> >
> > Yeah, it's not supposed to be a replacement for drawImage().
>
> This is why I was confused then, since I was told on this list that ImageBitmap was a solution for the problem of drawing subrectangles of images via drawImage (since the current specified behavior makes it impossible to precisely draw a subrectangle). :(

Oh. My apologies for any confusion caused here.

> The use case is being able to draw lots of different subrectangles of lots of different images in a single frame.

Like, sprites? Wouldn't you know these ahead of time?

> You can, it's just significantly more complicated. It's not something you can easily expose in a user-consumable library wrapper either, since it literally alters the execution model for your entire rendering frame and introduces a pause for every group of images that need the use of temporary ImageBitmap instances. I'm compiling classic 2D games to JavaScript to run in the browser, so I literally call drawImage hundreds or thousands of times per frame, most of the calls having a unique source rectangle. I will have to potentially construct thousands of ImageBitmaps and wait for all those callbacks. A cache will reduce the number of constructions I have to do per frame, but then I have to somehow balance the risk of blowing through the entirety of the end user's memory (a very likely thing on mobile) or create a very aggressive, manually flushed cache that may not even have room for all the rectangles used in a given frame.
> Given that an ImageBitmap creation operation may not be instantaneous, this really makes me worry that the performance consequences of creating an ImageBitmap will make it unusable for this scenario.

If you have ImageBitmaps of subregions of a master image, I would imagine browsers could optimise that such that the ImageBitmaps don't take much memory at all. If they are sprites, you would know what they are ahead of time. If they're not, I'm not fully following what you mean.

> (I do agree that if you're building a game from scratch for HTML5 Canvas based on the latest rev of the API, you can probably design for this by having all your rectangles known in advance - but there are specific rendering primitives that rely on dynamic rectangles, like for example filling a progress bar with a texture, tiling a texture within a window, or scrolling a larger texture within a region. I've encountered all these in real games.)

Can you elaborate on these?

> To be clear, I think this is essential because it is a synchronous operation (this form of ImageBitmap could potentially not even involve a copy, though I understand if for some reason you can't provide that) and it's an operation that is extremely common in performance-sensitive 2D rendering. To me, the GC pressure from ImageBitmap instances is bad enough; adding an event loop turn and a copy and potentially another decode is just plain ridiculous. It'll force people to go straight to WebGL, which would be a shame (especially due to the compatibility penalty that results from that.)

I'm not really understanding why you can't just use drawImage() if you are in fact just drawing arbitrary subparts of a master image. Why would you want to use ImageBitmap? ImageBitmap was mostly about being able to pass images to workers, originally.
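(For what it's worth, the rectangle-keyed cache being described above could be sketched roughly as below. All names here are made up for illustration; `create` stands in for an async createImageBitmap-style factory, so the caching logic itself is plain JavaScript.)

```javascript
// Sketch of a bitmap cache keyed by source rectangle rather than sprite
// name, so callers with dynamic rectangles need no extra bookkeeping.
class RectBitmapCache {
  constructor(create) {
    this.create = create;     // async (image, sx, sy, sw, sh) => bitmap
    this.map = new Map();
  }
  get(image, sx, sy, sw, sh) {
    const key = `${sx},${sy},${sw},${sh}`;
    let entry = this.map.get(key);
    if (!entry) {
      // Cache the Promise itself so concurrent requests for the same
      // rectangle share one creation.
      entry = this.create(image, sx, sy, sw, sh);
      this.map.set(key, entry);
    }
    return entry; // a Promise for the bitmap
  }
}
```

Repeated requests for the same rectangle hit the cache; the memory-pressure problem described in the quote is exactly that nothing here bounds how many entries the Map accumulates.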
> Yeah, I thought you were aware that this came up because I *can't* use drawImage, and it turned out from discussion that it was impossible (or undesirable) to fix the problems with drawImage.

I'm assuming you're referring to the case where if you try to draw a subpart of an image and for some reason it has to be sampled (e.g. you're drawing it larger than the source), the anti-aliasing is optimised for tiling and so you get "leakage" from the next sprite over. If so, the solution is just to separate the sprites by a pixel of transparent black, no?

On Thu, 18 Jul 2013, Justin Novosad wrote:
>
> This is a really good point and case. I was under the impression that the color bleeding prevention was to be solved with ImageBitmaps, but as you point out, it breaks down for cutting rectangles on the fly. Furthermore, I think there is also no good solution for synchronously cutting rectangles out of animated image sources like an animated canvas or a playing video. Two possible solutions that were brought up so far on this list:
>
> a) have synchronous versions of createImageBitmap

ImageBitmap wasn't meant for these cases. If you want to make a new image in this way, I would recommend using drawImage() onto a new canvas.

> b) have a rendering option to modify drawImage's edge filtering behavior (either an argument to drawImage or a rendering context attribute)

Yeah, maybe we should do that. I filed a bug:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=25635

If any vendors want to implement something like that, comment on the bug.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Friday, 9 May 2014 19:04:07 UTC