- From: Gregg Tavares <gman@google.com>
- Date: Wed, 20 Mar 2013 12:22:40 -0700
- To: Rik Cabanier <cabanier@gmail.com>
- Cc: Dirk Schulze <dschulze@adobe.com>, "Tab Atkins Jr." <jackalmage@gmail.com>, Stephen White <senorblanco@chromium.org>, "public-fx@w3.org" <public-fx@w3.org>
- Message-ID: <CAKZ+BNo8FW7h2qL-Y_DgGxR=ETSWArVtqPGxQ=58xyMYdarqWw@mail.gmail.com>
On Mon, Mar 18, 2013 at 3:24 PM, Rik Cabanier <cabanier@gmail.com> wrote:

> On Fri, Mar 15, 2013 at 2:40 PM, Gregg Tavares <gman@google.com> wrote:
>
>> On Fri, Mar 15, 2013 at 2:33 PM, Dirk Schulze <dschulze@adobe.com> wrote:
>>
>>> On Mar 15, 2013, at 1:50 PM, Gregg Tavares <gman@google.com> wrote:
>>>
>>> > This brings up an issue that I don't *think* is addressed by the
>>> > current custom filters proposal?
>>> >
>>> > In custom filters, assuming they use GLSL, there are global values
>>> > available. For example
>>> >
>>> > gl_FragCoord
>>> >
>>> > that are in device coordinates. If we want CSS custom filters to be
>>> > device independent, the spec will probably need to mention that
>>> > shaders using gl_FragCoord are disallowed, or else that
>>> > implementations must re-write the shader so that gl_FragCoord is in
>>> > CSS pixels and not device pixels.
>>>
>>> gl_FragCoord must be in device coordinates to provide access to the
>>> texture data. For shaders it doesn't make sense for the pixel
>>> information to be in CSS coordinates. Ditto for the textureSize
>>> uniform, which represents the actual texture size. I added a
>>> description to the spec to clarify this fact.
>>
>> The problem you'll have is that the texture size will be different on
>> a high-DPI display vs. a low-DPI display, and effects like generating
>> a checkerboard from gl_FragCoord will generate the wrong size
>> checkerboard.
>
> That is assuming that you don't want the checkerboard to always be in
> device pixels (i.e. the 'transparency' checkerboard pattern in
> Photoshop does not scale, so this is something that you could use a
> custom device-dependent shader for).
> If an author wants device-independent custom filters, he should write
> them using relative coordinates.
> If the author desires a device-dependent rendering, he can use
> "gl_FragCoord". "u_devicePixelRatio" could then be used to scale the
> output of the shader so it looks the same on low- and high-DPI devices.

I'm assuming that introducing device dependencies into CSS is a bad
idea. Devs will write some custom filter, expect that it works
everywhere, and then have to test on every different device in every
browser to see if it worked. That's not true of any other Web API I
know of.

>>> > On Fri, Mar 15, 2013 at 12:54 PM, Tab Atkins Jr.
>>> > <jackalmage@gmail.com> wrote:
>>> > On Fri, Mar 15, 2013 at 11:59 AM, Stephen White
>>> > <senorblanco@chromium.org> wrote:
>>> > > In particular, in Chrome's accelerated implementation, on a
>>> > > high-DPI display, we get high-DPI input images from the
>>> > > compositor. Right now, we filter the high-DPI image by the
>>> > > original (unscaled) parameter values, which, for the filters
>>> > > whose pixel's result depends on more than a single input pixel
>>> > > value (e.g., blur(), drop-shadow()), results in less blurring
>>> > > than would be visible on a non-high-DPI display. This seems
>>> > > wrong. (Last time I checked, the non-composited path was
>>> > > downsampling the input primitive, giving a non-high-DPI result
>>> > > but correct amounts of blur, although that may have been fixed.)
>>> >
>>> > This is a bug in our implementation, then. The values in the
>>> > functions are CSS values, so a length of "5px" means 5 CSS pixels,
>>> > not 5 hardware pixels. The browser has to scale that to whatever
>>> > internal notion of "pixel" it's using.
>>> > > For blur() and drop-shadow(), it would be straightforward to
>>> > > scale the parameter values by the devicePixelRatio
>>> > > automatically, and achieve the correct amount of blurring
>>> > > without affecting the resolution of the result. Of course, we
>>> > > could downsample the input primitive for all filters, but that
>>> > > would lose the high DPI even for those filters which are
>>> > > unaffected by this problem, e.g., brightness() etc.
>>> > >
>>> > > However, for the reference filters, in particular
>>> > > feConvolveMatrix, it's not clear what the optimal behaviour is.
>>> > > It's tempting to simply multiply the kernelUnitLength by the
>>> > > devicePixelRatio, and apply the convolution as normal. However,
>>> > > that also loses high DPI, and incurs the cost of a downsample
>>> > > where it otherwise wouldn't be required (also note that
>>> > > kernelUnitLength seems to be unimplemented in WebKit, but that's
>>> > > our problem). Would it be a possibility to simply upsample the
>>> > > kernel by devicePixelRatio instead, and apply that kernel to the
>>> > > original unscaled image? (Or perhaps
>>> > > size' = (size - 1) * devicePixelRatio + 1 for odd kernel
>>> > > sizes?) This would result in a similar effect range, while
>>> > > preserving the resolution of the source image.
>>> > >
>>> > > I have no idea if the convolution math is really correct this
>>> > > way, though. I'm guessing not, since if it was, presumably the
>>> > > spec would have allowed its use for kernelUnitLength application
>>> > > in general.
>>> >
>>> > I'm not sufficiently familiar with feConvolveMatrix to know how to
>>> > handle it well. However, if you get a substantially different
>>> > result (beyond rendering/scaling artifacts), the implementation is
>>> > definitely wrong in some way. None of SVG or CSS should require
>>> > knowledge of the device's DPI.
>>> >
>>> > ~TJ
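To make Stephen's proposed kernel upsampling concrete (a worked example
under the assumption devicePixelRatio = 2): a 3x3 kernel becomes
size' = (3 - 1) * 2 + 1 = 5, i.e. a 5x5 kernel applied to the unscaled
high-DPI source, so the convolution covers the same extent in CSS
pixels while the source keeps its full resolution. By the same logic as
Tab's point about CSS values, blur(5px) on such a display would be
applied as a 10-device-pixel blur.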
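For the checkerboard case Rik and Gregg debate further up the thread,
here is a minimal GLSL fragment-shader sketch; the shader body and the
8-CSS-pixel cell size are illustrative assumptions, and
u_devicePixelRatio is the uniform Rik proposes above, not anything
defined by the custom filters spec:

```glsl
// Minimal sketch (assumptions noted above): a checkerboard driven by
// gl_FragCoord. gl_FragCoord is in device pixels; dividing by
// u_devicePixelRatio converts back to CSS pixels, so the squares keep
// the same apparent size on low- and high-DPI displays.
precision mediump float;

uniform float u_devicePixelRatio; // e.g. 1.0 (low-DPI) or 2.0 (high-DPI)

void main() {
    // Rescale device-pixel coordinates to CSS pixels.
    vec2 cssCoord = gl_FragCoord.xy / u_devicePixelRatio;

    // 8x8 CSS-pixel checkerboard; dropping the division above yields
    // the device-dependent (Photoshop-style) pattern instead.
    vec2 cell = floor(cssCoord / 8.0);
    float checker = mod(cell.x + cell.y, 2.0);

    gl_FragColor = vec4(vec3(checker), 1.0);
}
```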
Received on Wednesday, 20 March 2013 19:23:08 UTC