Re: [css3-background] Where we are with Blur value discussion

On Fri, Jul 16, 2010 at 3:17 PM, Tab Atkins Jr. <jackalmage@gmail.com> wrote:
> On Thu, Jul 15, 2010 at 6:36 PM, Tab Atkins Jr. <jackalmage@gmail.com> wrote:
>> As a contribution to the discussion, here's a hand-rolled
>> implementation of gaussian blur you can play with to get a sense for
>> how exactly it works.
>>
>> http://www.xanthir.com/etc/blurplay/
>>
>> It appears that just halving the length gives a good sigma for lengths
>> 40px or less, but that approximation gradually becomes worse as the
>> blur gets larger (you gradually want something larger than half the
>> length).
>>
>> ~TJ
>>
>> (By the way, I can see why gaussians are so sucky to work with.  I was
>> getting detectable floating-point errors nearly the entire time, which
>> were visible to the naked eye once the stdev topped 30 or so.  I had
>> to contort my computations to keep decent accuracy.)
>
> So, it turns out the errors weren't due to floating-point precision.
> I hadn't realized that CanvasPixelArray takes an Octet instead of a
> Number, and so I was losing a lot of precision due to the automatic
> flooring that was happening.  Tweaking my algorithm slightly produces
> ideal results with no contortions necessary.
>
> Now, the relationship between stdev and blur length is clear.
> Amazingly enough, if we define the length as "the distance from the
> edge where it reaches 98% transparency", then the stdev we need to
> produce that is just half the length.
>
> Yup, just divide the length by 2 and you have the appropriate stdev to
> feed the gaussian.  This relationship holds basically perfectly all
> the way from 0 to 100 stdevs (that is, 0 to 200px blur length).  This
> approximation eventually fails, but I don't know if that's an actual
> divergence or just actual floating point errors finally catching up to
> me.  I'd want to run this with some infinite-precision arithmetic to
> verify.
>
> So, we have a relatively easy specification - just say that the blur
> must approximate a gaussian with a stdev of half the blur length.
> Then we can put up a little bit of language defining ranges that the
> approximation must fall within, which we were already planning on.
>
> I'm gonna call this problem solved, then.
>
> ~TJ
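
For reference, here's a quick numerical check of that observation, done
as a plain 1D convolution in Python rather than on the canvas test page
(the 40px blur length and the 1000px strip are just made-up numbers):

import math

def gaussian_kernel(sigma):
    # Discrete 1D Gaussian, truncated at 4 standard deviations, normalized.
    radius = int(math.ceil(4 * sigma))
    weights = [math.exp(-x * x / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return radius, [w / total for w in weights]

blur_length = 40.0            # e.g. a 40px blur value
sigma = blur_length / 2       # the proposed relationship

# Alpha profile of a long, straight shadow edge: fully opaque on the left,
# fully transparent on the right.
width = 1000
edge = width // 2
alpha = [1.0 if x < edge else 0.0 for x in range(width)]

radius, kernel = gaussian_kernel(sigma)
blurred = [sum(kernel[i] * alpha[min(max(x + i - radius, 0), width - 1)]
               for i in range(len(kernel)))
           for x in range(width)]

# Alpha one blur length past the original edge: roughly 0.02, i.e. about
# 98% transparent, for any blur_length you plug in above.
print(blurred[edge + int(blur_length)])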

I am not a mathematician, so please correct me if I make a mistake
here. If you don't care why Tab's stdev = blur_radius / 2 relationship
works, feel free to skip.

I also played around with finding a general inverse of a Gaussian
function and failed to find pretty (or even real-valued) results. But a
Gaussian blur is of course a *convolution* with the Gaussian function
(essentially a weighted average of surrounding pixel values, weighted
by the value of the function at each distance), so the actual question
is one of volume: how many fully opaque pixels are needed, and at what
distances, for their weighted values to add xx% opacity to an otherwise
transparent pixel. That's still not a simple question in general, but
for a long, straight shadow edge it's easy, as long as "long" is long
enough.**

Gaussian blurs are linearly separable; a 2D blur is the same as
consecutive 1D blurs, one horizontal and the other vertical. In Tab's
test, the horizontal blur over the color divide spreads things out
left to right, but the subsequent vertical blur has no effect. Each
column of pixels is at that point a single color, and a weighted
average of one color leaves the pixels unchanged (which is why Tab
doesn't do the vertical blur). The same thing (conceptually) happens
for long, straight shadow edges of any orientation, since a blur can
be separated arbitrarily as long as the two blur directions are
perpendicular.
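
A small sketch of the separability claim, using numpy/scipy rather than
canvas (the sigma and image size are arbitrary): blurring with the full
2D kernel and blurring with two perpendicular 1D passes give the same
result.

import numpy as np
from scipy.signal import convolve2d

def gaussian_1d(sigma):
    radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

sigma = 5.0                          # e.g. a 10px blur length
k1 = gaussian_1d(sigma)
k2 = np.outer(k1, k1)                # the full 2D Gaussian kernel

img = np.random.default_rng(0).random((64, 64))

# One pass with the 2D kernel...
full = convolve2d(img, k2, mode="same")
# ...equals a horizontal 1D pass followed by a vertical 1D pass.
horizontal = convolve2d(img, k1[np.newaxis, :], mode="same")
separable = convolve2d(horizontal, k1[:, np.newaxis], mode="same")

print(np.allclose(full, separable))  # True, up to floating-point error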

Since one direction of blur has no effect, this becomes a 1D problem
which is already well explored: the area under the 1D Gaussian curve,
for all inputs with magnitude greater than some value. It's basically
just asking for this:

http://upload.wikimedia.org/wikipedia/commons/8/8c/Standard_deviation_diagram.svg

A sampling of pixels within 2 standard deviations on each side of a
pixel includes about 95.4% of what will be the blurred value of that
pixel. In the simplest case, the shadow edge is on just one side of
the pixel, so the missing 4.6% is actually halved. In Tab's example,
again, a black pixel 2 standard deviations to the right of the white
pixels will be blurred to a value of about 2.3% white, 97.7% black,
regardless of the value of the standard deviation. Hence, stdev =
blur_radius / 2 for about 98% transparency at blur_radius distance
from the shadow's edge.
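
(Those figures are easy to check with any statistics library; in
Python, for instance:)

from statistics import NormalDist

# Fraction of a Gaussian's weight lying more than 2 standard deviations
# to one side -- the opaque side's contribution to a pixel 2 stdevs away.
print(NormalDist().cdf(-2))   # ~0.0228, i.e. ~2.3% white / ~97.7% black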

The stdev to area relationship is well defined (just search for
"Confidence Interval") and can be calculated offline. For a cutoff of
99% transparency, set

stdev = blur_radius / (Sqrt(2) * ierf(1 - (.01 * 2)))
which is approximately
stdev = blur_radius / 2.32635

where ierf is the inverse of the error function (erf) associated with
the normal distribution. The .01 is multiplied by two since we only
care about the tail on one side of the pixel.
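
If it helps, that divisor is just the standard normal's inverse CDF at
99%, which is the same thing as the sqrt(2) * ierf(...) expression
above; a quick Python check:

from statistics import NormalDist

# z such that only 1% of the Gaussian's weight lies beyond it on one side;
# equal to sqrt(2) * ierf(1 - 2 * .01).
z = NormalDist().inv_cdf(0.99)
print(z)          # ~2.32635
print(100 / z)    # stdev for a 99%-transparency cutoff at a 100px blur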

I also ran into the same failure with larger blur values, but it ended
up being due to the finite canvas size and the renormalization step in
the test page. With a large enough sigma, one side of the kernel can
be over the white pixels while the other side is cut off by the right
edge of the canvas. When divided by the kernel values used, the white
pixels are given a greater weight than they would have with a larger
canvas, so the blur edge is biased outwards.
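
Here's a rough sketch of that bias, based on my reading of the
renormalization step rather than the actual test-page code (the sigma,
the distance to the white region, and the 20px of remaining canvas are
made-up numbers):

import math

sigma = 80.0
radius = int(math.ceil(4 * sigma))
offsets = range(-radius, radius + 1)
weights = [math.exp(-d * d / (2 * sigma * sigma)) for d in offsets]

def is_white(d):
    # The white region ends 2 standard deviations to the left of the pixel.
    return 1.0 if d < -2 * sigma else 0.0

# Full kernel: the correct blurred value, ~0.023.
full = sum(w * is_white(d) for d, w in zip(offsets, weights)) / sum(weights)

# Only `room` pixels remain between this pixel and the right canvas edge,
# so the right tail of the kernel is dropped and the rest renormalized.
room = 20
kept = [(d, w) for d, w in zip(offsets, weights) if d <= room]
clipped = sum(w * is_white(d) for d, w in kept) / sum(w for _, w in kept)

print(full, clipped)  # clipped is noticeably larger: the edge is biased outwards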

** ("long enough" is really important. The 1D simplifications can no
longer be used as soon as a corner gets involved)

Received on Saturday, 17 July 2010 23:39:52 UTC