Re: [whatwg] WebGL and ImageBitmaps

On Wed, May 14, 2014 at 6:27 PM, Glenn Maynard <glenn@zewt.org> wrote:

> That's only an issue when sampling without premultiplication, right?
>
> I had to refresh my memory on this:
>
> https://zewt.org/~glenn/test-premultiplied-scaling/
>
> The first image is using WebGL to blit unpremultiplied.  The second is
> WebGL blitting premultiplied.  The last is 2d canvas.  (We're talking about
> canvas here, of course, but WebGL makes it easier to test the different
> behavior.)  This blits a red rectangle surrounded by transparent space on
> top of a red canvas.  The black square is there so I can tell that it's
> actually drawing something.
>
> The first one gives a seam around the transparent area, as the white
> pixels (which are completely transparent in the image) are sampled into the
> visible part.  I think this is the problem we're talking about.  The second
> gives no seam, and the Canvas one gives no seam, indicating that it's a
> premultiplied blit.  I don't know if that's specified, but the behavior is
> the same in Chrome and FF.
>

It looks right on red, but with a green background you can still see the
post-premultiplication black (the RGB of transparent texels after
premultiplying) being pulled in.  What you really want is GL_CLAMP_TO_EDGE,
which extends the outer edge of the image; GL_REPEAT would wrap around and
sample the opposite edge instead.
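The fringe effect can be sketched numerically (illustrative JavaScript, not code from the thread): a bilinear tap averages neighboring texels, so with straight alpha the RGB of a fully transparent texel bleeds into the result, while premultiplying first keeps it out.

```javascript
// Average two RGBA texels the way a single bilinear tap does,
// with and without premultiplied alpha.  Colors are [r, g, b, a] in 0..1.

// Straight-alpha interpolation: the transparent texel's RGB leaks in.
function lerpStraight(p, q, t) {
  return p.map((v, i) => v * (1 - t) + q[i] * t);
}

// Premultiplied interpolation: scale RGB by alpha first, then interpolate.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}
function lerpPremultiplied(p, q, t) {
  return lerpStraight(premultiply(p), premultiply(q), t);
}

const opaqueRed = [1, 0, 0, 1];
const transparentWhite = [1, 1, 1, 0]; // fully transparent, but RGB is white

// A tap halfway between the two texels:
const straight = lerpStraight(opaqueRed, transparentWhite, 0.5);
// -> [1, 0.5, 0.5, 0.5]: a pink fringe; the hidden white has bled in.

const premul = lerpPremultiplied(opaqueRed, transparentWhite, 0.5);
// -> [0.5, 0, 0, 0.5]: half-covered pure red, no fringe after compositing.
```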


On Wed, May 14, 2014 at 9:21 PM, K. Gadd <kg@luminance.org> wrote:

> The reason one pixel isn't sufficient is that if the minification
> ratio is below 50% (say, 33%), sampling algorithms other than
> non-mipmapped-bilinear will begin sampling more than 4 pixels (or one
> quad, in gpu shading terminology), so you now need enough transparent
> pixels around all your textures to ensure that sampling never crosses
> the boundaries into another image.
>
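As a rough rule of thumb (a hypothetical helper, not something from the thread), the padding Kevin describes grows with minification: a bilinear tap drawn at scale s covers roughly 1/s source texels, so the gutter needs about half that footprint on each side.

```javascript
// Rough sketch: transparent gutter texels needed around each sprite so that
// sampling at a given minification ratio never crosses into a neighbor.
// minification = on-screen size / source size, e.g. 0.33 for 33%.
function gutterTexels(minification) {
  // A tap's footprint spans roughly 1 / minification source texels;
  // pad half the footprint on each side, rounded up.
  return Math.ceil(0.5 / minification);
}

gutterTexels(0.5);  // -> 1: one pixel suffices down to 50%
gutterTexels(0.33); // -> 2: below 50%, one pixel is no longer enough
```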

I'm well aware of the issues with sampling sprite sheets; I've dealt with
them at length in the past.  That's unrelated to my last mail, though, which
was about premultiplication (something I've used less).


> I agree with this, but I'm not going to assume it's actually possible
> for a canvas implementation to work this way. I assume that color
> profile conversions are non-trivial (in fact, I'm nearly certain they
> are non-trivial), so doing the conversion every time you render a
> canvas to the compositor is probably expensive, especially if your GPU
> isn't powerful enough to do it in a shader (mobile devices, perhaps) -
> so I expect that most implementations do the conversion once at load
> time, to prepare an image for rendering. Until it became possible to
> retrieve image pixels with getImageData, this was a good, safe
> optimization.
>

What I meant is that I think color correction simply shouldn't apply to
canvas at all.  That may not be ideal, but I'm not sure of anything else
that won't cause severe interop issues.

To be clear, colorspace conversion (for example, converting an image's color
space to sRGB) isn't a problem, other than probably needing to be specified
more clearly and put behind an option somewhere, so you can avoid a lossy
colorspace conversion.  The problem is color correction that takes the
user's monitor configuration into account, since the user's monitor settings
shouldn't be visible to script.  I don't know enough about color correction
to know whether it can be done efficiently in an interoperable way, so that
the data scripts see isn't affected by the user's configuration.
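For reference, the device-independent half of this, the sRGB transfer function defined in IEC 61966-2-1, is fully specified, cheap, and interoperable; only the later conversion through the user's monitor profile depends on per-user state.  A sketch:

```javascript
// Standard sRGB electro-optical transfer (decode) function and its inverse,
// per IEC 61966-2-1.  This step is deterministic and safe to expose to
// script; the monitor-profile conversion is the part that isn't.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

srgbToLinear(0.5); // ≈ 0.214: mid-gray sRGB is much darker in linear light
```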

-- 
Glenn Maynard

Received on Thursday, 15 May 2014 02:45:50 UTC