Re: [whatwg] Adding features needed for WebGL to ImageBitmap

(Replying on behalf of Gregg, who unfortunately isn't at Google any more)

On Wed, Jul 10, 2013 at 3:17 PM, Ian Hickson <ian@hixie.ch> wrote:
> On Wed, 19 Jun 2013, Gregg Tavares wrote:
>>
>> In order for ImageBitmap to be useful for WebGL we need more options
>
> ImageBitmap is trying to just be a generic HTMLImageElement, that is, a
> bitmap image. It's not trying to be anything more than that.
>
> Based on some of these questions, though, maybe you mean ImageData?

Gregg meant ImageBitmap.

Some background: when uploading HTMLImageElements to WebGL, applications
need to be able to specify certain options, such as whether to
premultiply the alpha channel or perform colorspace conversion.
Because it seemed infeasible at the time to modify the HTML spec,
these options are set via the WebGL API. If they're set differently
from the browser's defaults (which are generally to premultiply and to
perform colorspace conversion), then the WebGL implementation has to
re-decode the image when it's uploaded to a WebGL texture. (There's no
way to know in advance whether a given image is intended for upload to
WebGL as opposed to insertion into the document, and making image
decoding lazier than it currently is would introduce bad hiccups while
scrolling.)
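For concreteness, this is roughly how a WebGL app requests those
options today. It's a minimal sketch: texture parameter setup and
error handling are omitted, and gl (a WebGLRenderingContext) and img
(a fully loaded HTMLImageElement) are assumed to already exist.

    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);

    // Request non-premultiplied pixels and no colorspace conversion,
    // i.e. the opposite of what the browser's decoder normally produces.
    gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
    gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);

    // Because the image was already decoded with the browser's defaults,
    // this call has to re-decode it synchronously before the upload.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);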

We'd like to avoid the same problems with the new ImageBitmap concept.

The current ImageBitmap draft has the problem that by the time the
callback is called, image decoding will already have been performed,
just as with HTMLImageElement -- at least, that is almost surely how
it will be implemented, in order to obey the rule "An ImageBitmap
object represents a bitmap image that can be painted to a canvas
without undue latency". As with HTMLImageElement, these options need
to be set before decoding occurs, to avoid the redundant work and
rendering pauses that would happen if operations like colorspace
conversion were done lazily. (By the way, colorspace conversion is
typically implemented inside the image decoder itself, and it would be
a lot of work to factor it out into code that can be applied to a
previously-decoded image. In fact, looking again at the code in Blink
that does this, I'd say it's completely infeasible.)


>> premultipliedAlpha: true/false (default true)
>> Nearly all GL games use non-premultiplied alpha textures. So all those
>> games people want to port to WebGL will require non-premultiplied textures.
>> Often in games the alpha might not even be used for alpha but rather for
>> glow maps or specular maps or the other kinds of data.
>
> How do you do this with <img> today?

Per the above: by specifying the option via the WebGL API, which
forces a synchronous image re-decode at upload time. This re-decode is
really expensive, and a major pain point for WebGL developers. It's so
bad that developers are using pure JavaScript decoders for the PNG and
JPG formats just so that they can do the decode on a worker thread.
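To illustrate the workaround (a sketch only: decodePNG() and
png-decoder.js stand in for whichever pure-JavaScript decoder a
project happens to use, and are not real APIs):

    // worker.js -- decode the compressed bytes off the main thread.
    // decodePNG() is a placeholder for a pure-JS PNG decoder.
    importScripts('png-decoder.js');
    onmessage = function (e) {
      var img = decodePNG(e.data); // -> { width, height, pixels: Uint8Array }
      postMessage(img, [img.pixels.buffer]);
    };

    // main.js -- fetch the file as raw bytes, decode in the worker, then
    // upload the RGBA data directly, bypassing HTMLImageElement entirely.
    // Assumes gl exists and a texture is already bound.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'texture.png');
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      var worker = new Worker('worker.js');
      worker.onmessage = function (e) {
        var img = e.data;
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, img.width, img.height, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, img.pixels);
      };
      worker.postMessage(xhr.response, [xhr.response]);
    };
    xhr.send();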


>> flipY: true/false (default false)
>> Nearly all 3D modeling apps expect the bottom-left pixel to be the first
>> pixel in a texture, so many 3D engines flip the textures on load. WebGL
>> provides this option, but it takes time and memory to flip a large image;
>> therefore it would be nice if that flip happened before the callback
>> from ImageBitmap.
>
> No pixel is the first pixel in an ImageBitmap. I don't really understand
> what this means.

There's a longstanding difference between the coordinate systems used
by most 2D libraries and those used by 3D APIs. OpenGL in particular
long ago adopted the convention that the origin of a texture is its
lower-left corner, with the Y axis pointing up.

Practically every image loading library written for OpenGL has offered
an option to flip (or not flip) loaded textures along the Y axis; the
option is needed to support content pipelines that load artists' work.

The WebGL spec exposes this option via the UNPACK_FLIP_Y_WEBGL
pixel-storage state.
http://www.khronos.org/registry/webgl/specs/latest/#TEXIMAGE2D_HTML
specifies that, by default, the upper-left pixel of an image is the
first pixel transferred to the GPU.

Flipping large images vertically is expensive, taking a significant
percentage of frame time. As with premultiplication of alpha, we want
to avoid doing it unnecessarily, redundantly, or synchronously with
respect to the application. For this reason we want to make it an
option on createImageBitmap, so that when the callback is called, the
decoded image data is already oriented properly for upload to the GPU.
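Concretely, the flip is requested today like this (same assumptions as
the earlier sketch), and it is performed synchronously inside the
upload call:

    // Ask WebGL to flip the image vertically during upload so that the
    // bottom-left pixel becomes the texture's origin.
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);

    // The implementation has to flip the full-size decoded image here,
    // on the calling thread, before handing the pixels to the GPU.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);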


>> colorspaceConversion: true/false (default true)
>> Some browsers apply color space conversion to match monitor settings.
>> That's fine for images with color but WebGL apps often load heightmaps,
>> normalmaps, lightmaps, global illumination maps and many other kinds of
>> data through images. If the browser applies a colorspace conversion, the
>> data is no longer suitable for its intended purpose; therefore many WebGL
>> apps turn off color conversions. As it is now, when an image is uploaded to
>> WebGL, if colorspace conversion is off
>> <http://www.khronos.org/registry/webgl/specs/latest/#PIXEL_STORAGE_PARAMETERS>,
>> WebGL has to synchronously re-decode the image. It would be nice if
>> ImageBitmap could handle this case so it can decode the image without
>> applying any colorspace manipulations.
>
> ImageBitmap doesn't apply any colour space manipulation. That's only done
> when drawing, according to the spec.

The spec may say that, but in practice the conversion will be applied
during image decoding. It's infeasible to factor out the colorspace
conversion code from existing JPEG and PNG image decoding libraries.
The implication is that if a WebGL app requires that colorspace
conversion not be performed -- which is the default behavior for most
apps -- a full image re-decode will have to be done.


> On Wed, 19 Jun 2013, Gregg Tavares wrote:
>>
>> colorspaceConversion: true   = browser does whatever it thinks is best for
>> color images.
>> colorspaceConversion: false  = give me the bits in the image file. Don't
>> manipulate them with either embedded color data or local machine gamma
>> corrections or anything else.
>
> This seems like something that should apply when _using_ the image, not in
> the API that just represents the raw image data.
>
> We could provide a way to say "strip color space information from any
> images loaded this way", but I don't understand why you'd include color
> space information that was wrong in the first place.

See above. The reality of browser implementations is that colorspace
conversion is an integral part of image decoding. If the WebGL app
requires that no colorspace conversion be applied during image
decoding then the image has to be re-decoded from the compressed data
with a different set of options.


>>     c = document.createElement("canvas");
>>     ctx = c.getContext("2d");
>>     i = ctx.getImageData(0, 0, 1, 1);
>>     i.data[0] = 255;
>>     ctx.putImageData(i, 0, 0);
>>     i2 = ctx.getImageData(0, 0, 1, 1);
>>     console.log(i2.data[0])  // prints 0 on both FF and Chrome
>
> This is using ImageData, not ImageBitmap. Are we talking about the same
> thing here? I'm confused.

I think Gregg's point here is that most CanvasRenderingContext2D
implementations premultiply the alpha channel into the color channels,
which loses information. From the early days of the WebGL spec it was
clear that this would not work for the 99% of 3D use cases that put
arbitrary data, not just colors, in those four channels. This is why
the state parameters UNPACK_FLIP_Y_WEBGL,
UNPACK_PREMULTIPLY_ALPHA_WEBGL, and UNPACK_COLORSPACE_CONVERSION_WEBGL
are in the spec.

The ImageBitmap spec as it stands will require re-decoding of images
when they're uploaded to the GPU for use by WebGL, just like
HTMLImageElement. Let's fix this by adding image decoding options to
createImageBitmap.
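To make that concrete, here is one possible shape for the API, written
against the draft's callback style. This is a sketch, not a settled
signature: the option names are the ones Gregg proposed above, the
dictionary placement is illustrative, and the direct upload assumes
texImage2D is extended to accept ImageBitmap sources.

    // Sketch only -- option names from this thread; exact IDL to be decided.
    createImageBitmap(image, {
      premultipliedAlpha: false,     // keep alpha separate from the color channels
      flipY: true,                   // decode with the bottom-left pixel first
      colorspaceConversion: false    // hand back the bits as stored in the file
    }, function (bitmap) {
      // Decoding already honored the options, so this upload needs no
      // re-decode, no flip, and no un-premultiply.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
    });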

-Ken


> --
> Ian Hickson               U+1047E                )\._.,--....,'``.    fL
> http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
