Re: [whatwg] WebGL and ImageBitmaps

Replies inline

On Wed, May 14, 2014 at 4:27 PM, Glenn Maynard <glenn@zewt.org> wrote:
> On Mon, May 12, 2014 at 3:19 AM, K. Gadd <kg@luminance.org> wrote:
>>
>> This is the traditional solution for scenarios where you are sampling
>> from a filtered texture in 3d. However, it only works if you never
>> scale images, which is actually not the case in many game scenarios.
>
>
> That's only an issue when sampling without premultiplication, right?
>
> I had to refresh my memory on this:
>
> https://zewt.org/~glenn/test-premultiplied-scaling/
>
> The first image is using WebGL to blit unpremultiplied.  The second is WebGL
> blitting premultiplied.  The last is 2d canvas.  (We're talking about canvas
> here, of course, but WebGL makes it easier to test the different behavior.)
> This blits a red rectangle surrounded by transparent space on top of a red
> canvas.  The black square is there so I can tell that it's actually drawing
> something.
>
> The first one gives a seam around the transparent area, as the white pixels
> (which are completely transparent in the image) are sampled into the visible
> part.  I think this is the problem we're talking about.  The second gives no
> seam, and the Canvas one gives no seam, indicating that it's a premultiplied
> blit.  I don't know if that's specified, but the behavior is the same in
> Chrome and FF.

The reason one pixel isn't sufficient is that once the minification
ratio drops below 50% (say, to 33%), sampling modes other than
non-mipmapped bilinear begin reading more than 4 texels (more than one
quad, in GPU shading terminology), so you need enough transparent
pixels around every image in the atlas to ensure that sampling never
crosses the boundary into a neighboring image.
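
To make that concrete, here's a rough back-of-the-envelope sketch (my
own numbers and function name, not taken from any spec) of how the
padding requirement grows as the mip level being sampled goes up:

  // Rough sketch: estimate how many texels of padding a sprite in an
  // atlas needs so that mipmapped bilinear (trilinear) sampling never
  // reads texels belonging to a neighboring image.
  function paddingTexelsNeeded(minificationRatio: number): number {
    // e.g. drawing at 33% of original size -> minificationRatio = 0.33
    const lod = Math.max(0, Math.log2(1 / minificationRatio));
    // Each texel at mip level N corresponds to 2^N texels of the base
    // image, and the 2x2 bilinear tap reaches about one such step past
    // the edge of the rectangle being drawn.
    return Math.pow(2, Math.ceil(lod));
  }

  paddingTexelsNeeded(1.0);  // 1 texel is enough at 1:1
  paddingTexelsNeeded(0.33); // ~4 texels once mip level 2 is sampled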

http://fgiesen.wordpress.com/2011/07/10/a-trip-through-the-graphics-pipeline-2011-part-8/
explains the concept of quads, along with relevant issues like
centroid interpolation. Anyone talking about correctness or
performance in modern accelerated rendering might benefit from reading
this whole series.

You do make a good point that whether the canvas implementation uses
premultiplied textures affects the result of scaling and filtering
(since scaling/filtering non-premultiplied RGBA produces color
bleeding from transparent pixels). Is that currently specified? I
don't think I've seen bleeding artifacts recently, but I'm not certain
whether the spec requires this behavior explicitly.

The issue here, however, is not color bleeding - color bleeding is a
math 'error' that results from not premultiplying - but that the
filtering algorithm samples pixels outside the 'rectangle' actually
intended to be drawn. (This is an inherent problem with sampling based
on texture coordinates and derivatives instead of explicit pixel
offsets.)
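
The color-bleeding half of this, for reference, is easy to see with a
toy example (my own illustration, not tied to any particular
implementation): average an opaque red texel with a fully transparent
white texel, the way a bilinear filter would halfway between them.

  type RGBA = [number, number, number, number]; // components in 0..1

  const opaqueRed: RGBA = [1, 0, 0, 1];
  const transparentWhite: RGBA = [1, 1, 1, 0];

  // Straight (non-premultiplied) average: the white RGB leaks in even
  // though its alpha is zero, giving a pinkish fringe when composited.
  const straight = opaqueRed.map((c, i) => (c + transparentWhite[i]) / 2);
  // -> [1, 0.5, 0.5, 0.5]

  // Premultiplied average: RGB is multiplied by alpha first, so fully
  // transparent texels contribute no color at all.
  const premul = (p: RGBA): RGBA =>
    [p[0] * p[3], p[1] * p[3], p[2] * p[3], p[3]];
  const a = premul(opaqueRed);
  const b = premul(transparentWhite);
  const filtered = a.map((c, i) => (c + b[i]) / 2);
  // -> [0.5, 0, 0, 0.5], i.e. pure red at 50% alpha after
  //    un-premultiplying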

If you search for 'padding texture atlases' you can see some examples
that show why this is a tricky problem and a single pixel of padding
is not sufficient:
http://wiki.polycount.com/EdgePadding

There are related problems for image compression as well, due to the
block-oriented nature of codecs like JPEG and DXTC. Luckily those
aren't something the user agent has to deal with in its canvas
implementation, but they're another example of where a single pixel of
padding isn't enough.
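
For illustration, a hypothetical atlas-packing helper (not from any
real tool) might expand each sprite's rectangle out to the
compressor's block grid so that no block straddles two images:

  // Pad a sprite rect out to the codec's block grid - 4x4 for DXTC/BC
  // formats, 8x8 for JPEG - so no compression block straddles two
  // images in the atlas.
  function padToBlockGrid(x: number, y: number, w: number, h: number,
                          block: number = 4) {
    const x0 = Math.floor(x / block) * block;
    const y0 = Math.floor(y / block) * block;
    const x1 = Math.ceil((x + w) / block) * block;
    const y1 = Math.ceil((y + h) / block) * block;
    return { x: x0, y: y0, w: x1 - x0, h: y1 - y0 };
  }

  padToBlockGrid(5, 5, 10, 10); // -> { x: 4, y: 4, w: 12, h: 12 }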

> On Tue, May 13, 2014 at 8:59 PM, K. Gadd <kg@luminance.org> wrote:
>> I thought I was pretty clear about this... colorspace conversion and
>> alpha conversion happen here depending on the user's display
>> configuration, the color profile of the source image, and what browser
>> you're using. I've observed differences between Firefox and Chrome
>> here, along with different behavior on OS X (presumably due to their
>> different implementation of color profiles).
>>
>> In this case 'different' means 'loading & drawing an image to a canvas
>> gives different results via getImageData'.
>
>
> That's a description, not an explicit example.  An example would be a URL
> demonstrating the issue.

http://joedev.net/JSIL/Numbers/ was the first game whose developer
reported an issue caused by this, because his levels are authored as
images. He ended up solving the problem by following my advice to
manually strip the color profile information from all of his images
(though this is not a panacea; a browser could decide that images
without profile information are officially sRGB, and then
profile-convert them to the display profile).

It's been long enough that I don't know if his uploaded build works
anymore or whether it will demonstrate the issue. It's possible he
removed his dependency on images by now.

Here is what I told the developer in an email thread when he first
reported the issue (and by 'reported' I mean 'sent me a very confused
email saying that his game didn't work in Firefox and he had no idea
why'):

> The reason it's not working in Firefox right now is due to a firefox bug, because your PNG files contain what's called a 'sRGB chunk': https://bugzilla.mozilla.org/show_bug.cgi?id=867594
> I don't know if this bug can be fixed on Firefox's side because it's an area where things are bad, so the best option is to fix the PNG files yourself. You can do this using the 'pngcrush' utility with a command line like this:
>
> pngcrush -ow -rem sRGB *.png
>
> It seems like your image editor added a sRGB chunk to all your images; the problem is that this causes their color data to get modified depending on your monitor's color profile. If you have trouble figuring out how to do this, let me know and I can try to do it myself. I may have to add support for this to JSIL.


> The effects of color profiles should never be visible to script--they should
> be applied when the canvas is drawn to the screen, not when the image is
> decoded or the canvas is manipulated.  That seems hard to implement, though,
> if you're blitting images to a canvas that all have different color
> profiles.  It's probably better to ignore color profiles for canvas entirely
> than to expose the user's monitor configuration like this...

I agree with this, but I'm not going to assume it's actually possible
for a canvas implementation to work this way. I assume that color
profile conversions are non-trivial (in fact, I'm nearly certain they
are), so doing the conversion every time a canvas is rendered to the
compositor is probably expensive, especially if the GPU isn't powerful
enough to do it in a shader (on mobile devices, perhaps). So I expect
that most implementations do the conversion once at load time, to
prepare the image for rendering. Until it became possible to retrieve
image pixels with getImageData, that was a good, safe optimization.
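
For what it's worth, the kind of check a page can do to see whether
this is happening looks something like the sketch below (the test
image and its authored color are hypothetical): draw a one-pixel PNG
whose color you know, then look at what getImageData returns.

  const img = new Image();
  img.src = 'test-pixel-with-srgb-chunk.png'; // hypothetical test asset
  img.onload = () => {
    const canvas = document.createElement('canvas');
    canvas.width = canvas.height = 1;
    const ctx = canvas.getContext('2d')!;
    ctx.drawImage(img, 0, 0);
    const [r, g, b] = ctx.getImageData(0, 0, 1, 1).data;
    // If the browser applied a color profile at decode time, these can
    // come back different from the values the PNG was authored with,
    // and different again on a machine with another display profile.
    console.log(r, g, b);
  };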

A similar problem in 2d/3d rendering is the difference between
gamma-corrected and linear lighting spaces. Essentially,
gamma-corrected is what you want for presentation to the monitor,
because it matches the response curve of the display. For compositing
and lighting you want to operate in linear space, so that the
brightness differential 'x' in '(a - b) = x' is the same regardless of
what value 'a' has. This involves being able to tell the GPU that a
texture or framebuffer is linear or gamma-corrected, and being able to
ask it to convert between the two (or doing the conversions yourself).
A few references on this subject:

http://renderwonk.com/blog/index.php/archive/adventures-with-gamma-correct-rendering/
http://www.altdevblogaday.com/2011/06/02/yet-another-post-about-gamma-correction/
http://blog.wolfire.com/2010/02/Gamma-correct-lighting
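
To make the conversion concrete, here's a minimal sketch of the
sRGB <-> linear transfer functions those articles describe (the
standard IEC 61966-2-1 curves), plus the classic example of why
blending in the wrong space matters:

  function srgbToLinear(c: number): number {
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  }
  function linearToSrgb(c: number): number {
    return c <= 0.0031308 ? c * 12.92
                          : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
  }

  // A 50/50 blend of black and white, per channel in 0..1:
  const naive = (0 + 1) / 2;                                       // 0.5
  const linear = linearToSrgb((srgbToLinear(0) + srgbToLinear(1)) / 2);
  // ~0.735 - noticeably brighter, and the value that actually looks
  // like a midpoint on a gamma-corrected display.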

I should point out that this is another big issue that canvas may fail
on, but I haven't run into it personally, so I have no test cases.
There are probably developers who care a lot about it, but once WebGL
exposes the relevant GL extensions they will probably be able to
resolve it themselves (IIRC there are some standard-ish GL extensions
for linear-space lighting and blending now).

People doing photo manipulation and other things using Canvas, where
precision and linearity are important to them, may actually care about
this right now.
