
[whatwg] ImageBitmap feature requests

From: Ian Hickson <ian@hixie.ch>
Date: Fri, 9 May 2014 21:51:53 +0000 (UTC)
To: "whatwg@whatwg.org" <whatwg@whatwg.org>
Message-ID: <alpine.DEB.2.00.1405092050450.13733@ps20323.dreamhostps.com>
On Thu, 18 Jul 2013, Justin Novosad wrote:
> On Thu, Jul 18, 2013 at 12:50 PM, Ian Hickson <ian@hixie.ch> wrote:
> > On Wed, 9 Jan 2013, Ashley Gullen wrote:
> > >
> > > Some developers are starting to design large scale games using our 
> > > HTML5 game engine, and we're finding we're running in to memory 
> > > management issues.  Consider a device with 50mb of texture memory 
> > > available.  A game might contain 100mb of texture assets, but only 
> > > use a maximum of 30mb of them at a time (e.g. if there are three 
> > > levels each using 30mb of different assets, and a menu that uses 
> > > 10mb of assets).  This game ought to fit in memory at all times, but 
> > > if a user agent is not smart about how image loading is handled, it 
> > > could run out of memory.
> >
> > The Web API tries to use garbage collection for this; the idea being 
> > that you load the images you need when you need them, then discard 
> > them when you're done, and the memory gets reclaimed when possible.
> 
> This is probably an area where most browsers could do a better job. 
> Browsers should be able to handle the texture memory issues 
> automatically without any new APIs; if they can't, then file bug 
> reports.  If garbage collection is not kicking in at the right time, 
> report it to the vendor. ImageBitmap should provide the same kind of 
> pinning semantics as the suggested ctx.load/unload.

This is good to know. If you are an author finding these problems, please 
do file bugs!


> However, one weakness of the current API is that upon construction of 
> the ImageBitmap, the browser does not know whether the asset will be 
> used with a GPU-accelerated rendering context or not. If this 
> information were available, the asset could be pre-cached on the GPU 
> when appropriate.  Maybe something like ctx.prefetch(image) would be 
> appropriate for warming up the caches.

Is this a measurable performance problem currently? I'd hate to provide 
such an API, which could easily be misunderstood or misused, only to find 
that in practice things already work ok.


On Fri, 19 Jul 2013, Ashley Gullen wrote:
>
> FWIW, imageBitmap.discard() wouldn't be unprecedented - WebGL allows you 
> to explicitly release memory with deleteTexture() rather than letting 
> the GC collect unused textures.

What has implementation experience been with this API? Is it misused much?


On Fri, 19 Jul 2013, Justin Novosad wrote:
>
> A related issue we have now is with canvas backing stores. It is common 
> for web apps to create temporary canvases to do some offscreen 
> rendering. When the temporary canvas goes out of scope, it continues to 
> consume RAM or GPU memory until it is garbage collected. Occasionally 
> this results in memory-leak-like symptoms.  The usual workaround is to 
> use a single persistent global canvas for offscreen work instead of 
> temporary ones (yuck).  This could be handled in a cleaner way if there 
> were a .discard() method on canvas elements too.

Would setting the canvas dimensions to zero have the same effect?

If so, and if this is common enough to warrant a convenience method, we 
could add a method that just sets the dimensions to zero.
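For context, the workaround being discussed amounts to something like the 
following (a sketch; releaseCanvas is a hypothetical helper name, not a 
proposed API):

```javascript
// Hypothetical helper (not a proposed API): free a canvas's backing
// store eagerly by collapsing its dimensions. Resizing a canvas clears
// it, which lets the browser drop the pixel buffer immediately instead
// of waiting for the element to be garbage collected.
function releaseCanvas(canvas) {
  canvas.width = 0;
  canvas.height = 0;
}
```

In a page this would be called on a temporary canvas once its pixels have 
been copied wherever they are needed.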


On Fri, 19 Jul 2013, K. Gadd wrote:
>
> Some of my applications would definitely benefit from this as well. A 
> port of one client's game managed to hit around 1GB of backing 
> store/bitmap data combined when preloading all their image assets using 
> <img>. Even though browsers then discard the bitmap data, it made it 
> difficult to get things running without killing a tab due to hitting a 
> memory limit temporarily. (The assets were not all in use at once, so 
> the actual usage while playing is fine). Having explicit control over 
> whether bitmaps are resident in memory would be great for this use case 
> since I can preload the actual file over the network, then do the actual 
> async forced decode by creating an ImageBitmap from a Blob, and discard 
> it when the pixel data is no longer needed (the game already has this 
> information since it uses the C# IDisposable pattern, where resources 
> are disposed after use)

Well, browsers should be more aggressive about garbage collecting if a 
lack of garbage collection is causing performance issues due to lack of 
RAM, no? Have you filed any bugs on browsers for this? Justin's comments 
above suggest we should maybe start with that.


On Tue, 13 Aug 2013, Kenneth Russell wrote:
> >
> > We could have a constructor for ImageData objects, sure. That would be 
> > relatively easy to add, if it's really needed. I don't understand why 
> > it's hard to keep track of ImageData objects, though. Can you 
> > elaborate?
> 
> I have in mind new APIs for typed arrays which allow sharding of typed 
> arrays to workers and re-assembly of the component pieces when the work 
> is complete. This would involve multiple manipulations of the 
> ArrayBuffer and its views. It would be most convenient if the result 
> could be wrapped in an ImageData if it's destined to be drawn to a 
> Canvas. Otherwise it's likely that a data copy will need to be incurred.

Does the (relatively) new constructor for ImageData address this 
sufficiently?


On Tue, 17 Dec 2013, David Flanagan wrote:
>
> The camera resolution on mobile devices has grown (and is continuing to 
> grow) much faster than the screen size and memory of those devices. In 
> my work with FirefoxOS, I work with devices that have camera sensors 
> that can capture 5-megapixel images but have 320x480 pixel (0.15 
> megapixel) screens. This means that photos from the camera are 33 times 
> as large as the screen.
> 
> An RGBA image format requires 4 bytes per pixel for decoded image data, 
> so if I want to decode one of these 5mp images for display on my 0.15mp 
> screen, I have to allocate 20mb of memory. This particular low-end 
> device I'm talking about has 256mb of RAM, and less than 200mb available 
> for apps, so displaying a single photo requires more than 10% of 
> available memory.
> 
> To make this work in the initial releases of FirefoxOS, we've limited 
> the camera resolution to 2 or 3mp on low-memory devices and have ensured 
> that our Camera app includes screen-sized EXIF preview images in the 
> photos it captures. We use JavaScript to extract the EXIF preview from 
> the photo and display that, when we can, instead of the actual image. So 
> initial display of a photo does not actually require us to decode the 
> full-size photo. But as soon as the user zooms in, we do have to 
> decode it and take the memory hit. The result is that on low-end 
> FirefoxOS phones background apps (including the homescreen) are commonly 
> killed while using the Gallery app.
> 
> The web platform has two mechanisms for decoding images: the <img> 
> element and the new window.createImageBitmap() function. Native 
> libraries exist that can downsample an image while decoding it, but the 
> web platform does not expose this feature. This is a fundamental 
> shortcoming: Web apps will not be able to achieve parity with native 
> photo display and processing apps until they are able to decode and 
> downsample a large image into a smaller bitmap.

I've filed a bug to track the idea of being able to scale an image when 
creating the ImageBitmap object:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=25641
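For what it's worth, the memory arithmetic David describes is easy to 
check (a sketch; the 2592x1944 sensor size is illustrative of a roughly 
5-megapixel camera):

```javascript
// Decoded RGBA bitmaps cost 4 bytes per pixel, independent of how small
// the compressed file on disk is.
function decodedSizeBytes(widthPx, heightPx, bytesPerPixel) {
  return widthPx * heightPx * (bytesPerPixel || 4);
}

// A 2592x1944 sensor is ~5 megapixels: roughly 20 MB once decoded,
// versus a 320x480 screen needing only ~0.6 MB -- about a 33x gap.
const photoBytes = decodedSizeBytes(2592, 1944);  // 20,155,392 bytes
const screenBytes = decodedSizeBytes(320, 480);   //    614,400 bytes
```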


> 3) Sometimes we want to decode and downsample a Blob but do not know the 
> pixel size or aspect ratio of the original image, so we cannot specify 
> exact dw, dh values.  My main use case here is to obtain a decoded image 
> that is no bigger than necessary but maintains the aspect ratio of the 
> original.  One way to get this would be to allow maxWidth and maxHeight 
> properties in the options dictionary.  If those properties were defined, 
> then createImageBitmap() would maintain the aspect ratio and create an 
> ImageBitmap that is no wider than maxWidth and no taller than maxHeight.  
> Another, more flexible, solution would be to allow a size property in 
> the dictionary. If size was omitted, then the dw and dh properties would 
> be the actual size of the ImageBitmap, even if that resulted in 
> distortion. If size was set to "contain", then the image would be 
> downsampled to be as large as possible while still being contained 
> within dw and dh and while preserving aspect ratio. (This is equivalent 
> to the maxWidth and maxHeight properties).  And if size was "cover", 
> then aspect ratio would be preserved, the resulting ImageBitmap would be 
> exactly dw by dh pixels, but the image would be cropped along the top 
> and bottom or the left and right edges to fit. Note that the names 
> "cover" and "contain" come from the CSS background-size property.

I've noted this idea in the bug above.
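To make the proposed semantics concrete, the output dimensions could be 
computed along these lines (a sketch; the function name and shape are 
illustrative, not part of any spec):

```javascript
// Compute the dimensions of the resulting ImageBitmap for a srcW x srcH
// source decoded into a dw x dh box, following the proposed
// "contain"/"cover" keywords (borrowed from CSS background-size).
function fitSize(srcW, srcH, dw, dh, size) {
  if (size === "cover") {
    // Exactly dw x dh; the source is cropped top/bottom or left/right.
    return { width: dw, height: dh };
  }
  if (size === "contain") {
    // Largest size that fits inside dw x dh while preserving the source
    // aspect ratio (equivalent to the maxWidth/maxHeight variant).
    const scale = Math.min(dw / srcW, dh / srcH);
    return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
  }
  // size omitted: use dw x dh as given, even if that distorts the image.
  return { width: dw, height: dh };
}
```

For a 4000x3000 photo targeted at a 1000x1000 box, "contain" yields a 
1000x750 bitmap, while "cover" yields exactly 1000x1000 with the left and 
right edges cropped.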


> 4) Even when we downsample an image while decoding it, we may still need 
> to know the full size of the original. In a photo gallery app, for 
> example, we need to know how big the original is so that we know how far 
> we can allow the user to zoom in to the image. It is possible to 
> determine this by parsing the image file ourselves in JavaScript, but it 
> would be much more convenient if the web platform provided a way to 
> determine the full size of an image without having to decode the entire 
> thing at the cost of 4 bytes per pixel.  Therefore I propose that the 
> ImageBitmap include fullWidth and fullHeight properties that specify the 
> full size of the ImageBitmapSource from which it was derived.  I suspect 
> (though I do not have an explicit use case) that it might also be helpful 
> to include the sx, sy, sw, and sh arguments that are passed to 
> createImageBitmap on the returned ImageBitmap.

Kevin had similar feedback in the thread I responded to earlier today:

   http://lists.w3.org/Archives/Public/public-whatwg-archive/2014May/0067.html

I've filed a bug on this too:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=25646


> 5) Once a large image is decoded and downsampled into a smaller 
> ImageBitmap, the only thing that we can do with that ImageBitmap is to 
> copy it into a Canvas, either for display to the end user (as an 
> alternative to an <img>) or for re-encoding with Canvas.toBlob() (when 
> creating thumbnails for large images). The motivation for this 
> downsampling feature is memory use. But having to copy an ImageBitmap 
> into a canvas in order to use it immediately doubles the amount of 
> memory required. So for this reason, I also want to propose that 
> ImageBitmap have a transferToCanvas() method akin to the 
> transferToImageBitmap() and transferToImage() methods proposed at 
> http://wiki.whatwg.org/wiki/WorkerCanvas.  transferToCanvas would 
> transfer the image data into a new Canvas object and would neuter the 
> ImageBitmap so that it could no longer be used.

This is an interesting idea. I don't know what the state of the other 
methods discussed here is (see my comment at the top of the e-mail cited 
above). However, I've filed a bug for this too:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=25647

I think what might make the most sense here is to have a way to 
destructively convert an ImageBitmap into an <img>, rather than doing 
anything with a canvas.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Friday, 9 May 2014 21:55:01 UTC
