Re: [css3-background] Where we are with Blur value discussion

On Fri, Jul 16, 2010 at 7:29 PM, Brad Kemper <brad.kemper@gmail.com> wrote:
> That's fine if you're an implementor, I suppose, but it's fairly meaningless to the authors who will be using it every day. I want to be able to say, "give me a blur that extends out 40px", and then verify that it has done so. This is a testable result that anyone who can count the pixels can verify. If the only way to verify is by using a complex formula that ordinary authors don't understand (Tab's math-jitsu is extraordinary) or by comparing it to other implementations of other specs, then that's not good enough.

I disagree.  The implementation requirements don't have to make any
sense at all to authors, as long as a) implementers can implement them
interoperably (preferably in a testable fashion), and b) a simplified
version is available for authors to use.  If the precise rule is
incomprehensible gibberish to authors, but we can tell authors "this
blurs by the given number of pixels", that's completely fine.

What's not fine at all is doing some hand-wavy definition for
implementers that makes it impossible for them to get
*identical*-looking output.  Slightly different approximations are
okay, but ideally, you should be able to superimpose the images and
flip back and forth between them and not perceive a difference.  If we
have to weaken that a bit, okay, but we should not permit renderings
that are clearly different when you glance at them side-by-side,
unless it's really necessary.

> If, on the other hand, I can say, "give me a blur that extends out 40px" by typing '40px' into the appropriate part of the rule, then that is not only intuitive, it is something that the UA can create very precisely with a Gaussian blur or any other kind of blur.

Then it will look different between different browsers.  We should try
to not write specs that allow this unless there's a specific good
reason, like respecting platform conventions or preserving
optimization opportunities.  If browsers use entirely different blur
algorithms, then they will diverge a lot in edge cases at the very
least, and that's not interoperable.  Mandating that browsers should
use one particular *pixel-specific* blur algorithm (or close
approximations thereto) is the correct thing to do, if implementers
are okay with it -- which isn't clear to me at this point.
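For what it's worth, one concrete way to pin down "a blur that extends out Npx" is the convention (used by later drafts of the spec) of a Gaussian blur whose standard deviation is half the author-specified blur distance. This is a minimal sketch of that idea, assuming sigma = blur / 2 and a kernel truncated at three standard deviations -- both of which are illustrative choices here, not quoted spec text:

```python
# Hypothetical sketch: build a 1D Gaussian kernel whose standard
# deviation is half the author-specified blur distance, then blur a
# row of pixel values. Clamping at the edges stands in for whatever
# edge handling a real rasterizer would use.
import math

def gaussian_kernel(blur_px):
    sigma = blur_px / 2.0               # assumption: sigma = blur / 2
    radius = int(math.ceil(3 * sigma))  # cover ~99.7% of the Gaussian
    weights = [math.exp(-(x * x) / (2.0 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # normalize so weights sum to 1

def blur_row(pixels, blur_px):
    kernel = gaussian_kernel(blur_px)
    radius = len(kernel) // 2
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        for k, w in enumerate(kernel):
            # clamp sample index at the row edges
            j = min(max(i + k - radius, 0), len(pixels) - 1)
            acc += w * pixels[j]
        out.append(acc)
    return out

# Blurring a hard edge with blur_px=8 spreads the 0-to-1 transition
# over roughly a dozen pixels either side of the original edge.
row = [0.0] * 30 + [1.0] * 30
blurred = blur_row(row, 8)
```

The point of fixing a formula like this is exactly the interoperability argument above: two implementations that agree on sigma and kernel extent will produce pixel values close enough to superimpose.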

Received on Monday, 19 July 2010 22:54:05 UTC