
Re: [filter-effects] resolution dependent filter primitives

From: Jasper van de Gronde <th.v.d.gronde@hccnet.nl>
Date: Wed, 06 Nov 2013 16:33:04 +0100
Message-ID: <527A6130.6050804@hccnet.nl>
To: Dirk Schulze <dschulze@adobe.com>
CC: "www-svg@w3.org" <www-svg@w3.org>
On 2013-11-06 12:39, Dirk Schulze wrote:
> ...
> We are not discussing these primitives that are easy to scale like 
> gaussian blur and most of the other primitives. We are more discussing 
> the two filter primitive groups that cannot be scaled in a 
> straightforward way: feConvolveMatrix and feLighting. SVG 1.1 introduced 
> kernelUnitLength to have consistent results across implementations. 
> Doing that introduces a device independent image buffer size that you 
> describe as “grid”. I am just saying that it is hard to align this 
> grid to the actually desired result. Even if you do (by element 
> specific filters), the result will indeed look “blurry” or pixelated. 
> This is noticeable especially on high DPI displays.
Granted, Gaussian blur is almost trivial, so feConvolveMatrix and 
fe*Lighting filters are a /little/ harder to scale, but it is still not 
terribly difficult (and those were the filters I had in mind). The 
question is just how you want to scale them (and I think everyone would 
agree that having a pixelated result is typically not desirable).

For example, suppose someone uses feConvolveMatrix with a horizontal 
matrix approximating a derivative: [-1 0 1]. Now, I can see two 
possibilities for when kernelUnitLength does not match the device 
resolution: either you essentially want the derivative at whatever 
resolution the device uses, or you want to get the best possible 
approximation to the "derivative" at the kernelUnitLength scale. The 
former doesn't really make sense to me, as the author has no control 
over the device resolution. The latter would be fairly easy to 
accomplish in a variety of ways. For example, one could simply convolve 
the matrix with some interpolation kernel to build a version of it that 
is matched to the device resolution. Assuming a decent interpolation 
kernel is used, this should look fine on a high DPI display. That is, it 
will be a good match for what is displayed on a normal screen, and it 
will be smoother.
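
To make that concrete, here is a sketch of the idea in Python/numpy. It 
resamples a 1-D kernel onto a grid `scale` times finer using a tent 
(linear interpolation) kernel; everything here is illustrative, nothing 
is mandated by the spec, and the function name is mine:

```python
import numpy as np

def resample_kernel(kernel, scale):
    """Resample a 1-D convolution kernel whose taps sit at
    kernelUnitLength spacing onto a grid `scale` times finer
    (i.e. device pixels), using a tent (linear) interpolation
    kernel.  Illustrative sketch only."""
    kernel = np.asarray(kernel, dtype=float)
    # Tap positions in device-pixel units, centred on the middle tap.
    positions = (np.arange(len(kernel)) - len(kernel) // 2) * scale
    half = int(np.ceil(np.abs(positions).max() + scale))
    xs = np.arange(-half, half + 1)  # device-pixel grid
    out = np.zeros(len(xs))
    for w, p in zip(kernel, positions):
        tent = np.maximum(0.0, 1.0 - np.abs(xs - p) / scale)
        # Divide by `scale` so each tent has unit mass, which
        # preserves the moments (and thus the meaning) of the kernel.
        out += w * tent / scale
    return out
```

For the [-1 0 1] derivative kernel at twice the device resolution this 
yields a smooth 9-tap kernel whose sum is still zero and whose first 
moment matches the original, i.e. it still estimates the same derivative, 
just without the pixelation.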

At this point you can argue indefinitely about the exact method used to 
resample a kernel (there are some pretty interesting possibilities), and 
you might be tempted to add a flag signifying that you want ultra-crisp 
results, which would make a browser snap the kernelUnitLength to an 
integer multiple of the device resolution, for example. But the general 
idea should be to not even think about buffer sizes or pixels as such; 
we should be thinking about how to define filters for something that is 
inherently a vector format living in a continuous world. Of course at 
that point you can start to wonder about the best way to render such 
things discretely, but that is a completely different story, and I would 
recommend essentially letting implementations experiment with that 
(although some suggestions can always be given, of course).

If you're still not convinced, I would welcome a specific example where 
you really do not want to do any kind of resampling (snapped or not). 
And I would also like to know what you /would/ want in that case.

BTW, fe*Lighting should probably simply be rewritten without referring 
to a convolution matrix. kernelUnitLength could still be used here if 
you wanted something like a derivative-of-Gaussian kernel, but in 
principle there is no reason for fe*Lighting to depend on the resolution 
at all (that dependence is just an artifact of computing the normal 
using discrete convolutions).
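
To illustrate what I mean: the lighting normal can be stated with no 
kernel at all, as the normal of the height field Z = surfaceScale * 
alpha. Any discrete scheme is then merely an approximation detail that 
vanishes as the sampling gets finer. A sketch of that view (illustrative 
Python using central differences; names are mine, not from the spec):

```python
import numpy as np

def surface_normal(alpha, surface_scale, dx):
    """Unit surface normal of the height field Z = surface_scale * alpha,
    approximated by central differences at sample spacing `dx`.  The
    continuous definition is resolution-independent; only the
    finite-difference approximation depends on dx."""
    gz_x = surface_scale * np.gradient(alpha, dx, axis=1)  # dZ/dx
    gz_y = surface_scale * np.gradient(alpha, dx, axis=0)  # dZ/dy
    # Normal of the surface (x, y, Z(x, y)), then normalised.
    n = np.stack([-gz_x, -gz_y, np.ones_like(alpha)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

For a planar alpha ramp this reproduces the exact continuous normal at 
any sampling density, which is the point: the normal is a property of 
the surface, not of the buffer it happens to be rasterized into.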
Received on Wednesday, 6 November 2013 15:33:39 UTC
