Re: [css3-background] vastly different takes on "blur"

On Fri, Jun 25, 2010 at 9:51 AM, Brad Kemper <brad.kemper@gmail.com> wrote:
> To get a better handle on counting the pixels, and seeing what browsers do and what a human can perceive, I created a little experiment that you can view here:
>
> http://www.bradclicks.com/cssplay/small-shadow-blurs.html
>
> I noticed that it was pretty hard to see the very lightest pixels, and in the dark areas there were even more shades that I could not perceive. So I colorized the results like a night vision scope to improve that.

Ah, very nice.  Thanks!


> The renderings are based on a Webkit nightly from Apple, on the Mac, but it looks like Firefox does the same. Some observations as I worked on this:
>
> • It seems that having an odd number of pixels for the total width of the blend is not possible, and floats are rounded down (I think). Is this a Gaussian blur limitation, or something else?

It surprises me somewhat that floats appear to be rounded down.  I'd
expect a 2.5px blur to render halfway between the 2px and 3px results.
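
A toy sketch of why even-only blend widths might be inherent (my own
illustration, not browser code; the helper names are made up): a symmetric
kernel straddling a pixel-aligned edge always produces an even number of
intermediate pixels.

```python
# Sketch: blur a hard 0/1 step edge with a symmetric discrete kernel
# and count the "blend" pixels (neither ~transparent nor ~opaque).
# Assumes the edge falls on a pixel boundary, as a box edge does.

def blur_step(kernel):
    """Convolve a 0/1 step edge with `kernel`; return pixel values."""
    r = len(kernel) // 2
    edge = [0.0] * 10 + [1.0] * 10        # hard edge between pixels 9 and 10
    out = []
    for i in range(len(edge)):
        acc = 0.0
        for k, w in enumerate(kernel, start=-r):
            j = min(max(i + k, 0), len(edge) - 1)   # clamp at the ends
            acc += w * edge[j]
        out.append(acc)
    return out

def blend_width(pixels, lo=0.01, hi=0.99):
    """Count pixels strictly between the two cutoffs."""
    return sum(1 for p in pixels if lo < p < hi)

# A crude 5-tap Gaussian-ish kernel (radius 2); weights sum to 1.
print(blend_width(blur_step([0.05, 0.25, 0.4, 0.25, 0.05])))  # 4
```

A radius-1 kernel gives 2 blend pixels, radius 2 gives 4, and so on: a
symmetric kernel covers r pixels on each side of the edge, so an odd total
blend width can't fall out of it.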


> • Shadows can be any color, including white, so I can't really recommend that we clip the range based on a per-color perception of whether or not a pixel is distinguishable from totally opaque or totally transparent. But it does seem like 3% black is almost indistinguishable from white, and that 97% black (or green-black in my experiment) is almost indistinguishable from total black, so maybe there should be limits in that range for fitting the transition to the space. Simon mentioned that implementations already clip the infinite edge of the Gaussian blur at some imperceptible point.

Right, per-color perception is silly.  3%-97% has merit, but I'm a
little more comfortable going all the way to 1%-99% as the bound for
"imperceptible".  It's harder to argue with, and it feels slightly
less arbitrary.
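
For concreteness, the visible transition width of an ideal Gaussian-blurred
edge under each cutoff can be computed directly.  The error-function profile
below is standard math for a Gaussian-convolved step, not anything from the
spec, and sigma = 1.0 is arbitrary (widths scale linearly with it):

```python
import math

def edge_profile(x, sigma=1.0):
    """Opacity at distance x across a step edge blurred with std dev sigma."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2))))

def crossing(threshold, sigma=1.0):
    """Find x where the blurred edge reaches `threshold`, by bisection."""
    lo, hi = -10.0 * sigma, 10.0 * sigma
    for _ in range(60):                     # profile is monotonic in x
        mid = (lo + hi) / 2
        if edge_profile(mid, sigma) < threshold:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for lo, hi in [(0.03, 0.97), (0.01, 0.99)]:
    width = crossing(hi) - crossing(lo)
    # widens from about 3.76 sigma (3%-97%) to about 4.65 sigma (1%-99%)
    print(f"{lo:.0%}-{hi:.0%} window: {width:.2f} sigma")
```

So the 1%-99% window is roughly a quarter wider than the 3%-97% one, which
is worth keeping in mind when deciding where the blur visually "ends".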


> • I would not want an implementation to use a blur algorithm where each pixel was 1/10 the opacity of the pixel next to it, but the current definition does not disallow that. If we limit the transition to be in, e.g. the 3% - 97% range, then perhaps we can thereby mitigate some of the more ridiculous blur algorithms.

I'd be okay with locking down the algorithm a little more, but I don't
think we have to do too much.  Simple prettiness considerations will
ensure that implementations don't do anything unreasonable.
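
A quick illustration of how degenerate that 1/10-per-pixel falloff is (my
own numbers, not any real implementation): nearly the whole transition lands
in a single pixel, however long the tail is, so a 3%-97% requirement would
rule it out.

```python
# Sketch: an opacity ramp where each pixel outward is 1/10 of its neighbor.
def decade_falloff(n):
    return [10.0 ** -i for i in range(n)]   # 1.0, 0.1, 0.01, ...

ramp = decade_falloff(8)
# Only one value falls inside the 3%-97% "perceptible transition" band.
visible = [p for p in ramp if 0.03 < p < 0.97]
print(visible)
```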


> • I really want the definition to be results-based, as it is in the LCWD, and not something based on a recipe such as "insert a number into a black box whose workings are not well understood by ordinary people, and accept whatever rendering comes out of it." Even if that is how canvas defines it.

STRONGLY agree.  Whatever is decided, the effect should be
*predictable* to authors.  Once they get a modicum of familiarity with
the feature, they should be able to take a blur value and predict
exactly the shape of the rendering.


> • If this is what 'text-shadow' does (there seems to be some debate about that still), and if odd numbers are really not possible, then I think I will join the enemy camp about measuring outward distances instead of total blend width. But it really has nothing to do with the arguments I heard prior to the one about text-shadow (which I thought were just plain silly). But I still want something predictable, even if it means slight adjustments to the definitions of both box-shadow and text-shadow.

It appears that text-shadow is either "outward expanse of blur = blur
length" (webkit, seemingly for both large and small values of blur) or
"outward expanse of blur = roughly blur length * 1.6" (Firefox).  I
haven't checked Opera yet.
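
Out of curiosity I tried to reconcile those two behaviors with a single
Gaussian model.  Assuming (and this is only my assumption, not something
the draft mandates) that the blur length maps to a standard deviation of
half that length, both measurements look like the same blur clipped at
different tail thresholds:

```python
import math

def expanse(blur, clip):
    """Distance outside the edge where opacity first drops below `clip`."""
    sigma = blur / 2.0                      # assumed mapping, not normative
    lo, hi = 0.0, 10.0 * sigma
    for _ in range(60):                     # bisect: opacity is monotonic
        mid = (lo + hi) / 2
        # opacity of a Gaussian-blurred step edge at distance mid outside it
        opacity = 0.5 * (1.0 - math.erf(mid / (sigma * math.sqrt(2))))
        if opacity > clip:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

blur = 10.0
for clip in (0.023, 0.0007):
    # roughly 1.00x blur at the ~2.3% clip, roughly 1.60x at the ~0.07% clip
    print(f"clip {clip:.2%}: expanse = {expanse(blur, clip) / blur:.2f} x blur")
```

If that model is right, webkit is clipping around the 2-sigma point (about
2.3% opacity) and Firefox somewhere past 3 sigma, which would make the
disagreement a clipping question rather than a kernel question.  Purely
speculative, of course.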

~TJ

Received on Friday, 25 June 2010 17:22:38 UTC