- From: Gregg Tavares <gman@google.com>
- Date: Fri, 15 Mar 2013 15:26:12 -0700
- To: Dirk Schulze <dschulze@adobe.com>
- Cc: "Tab Atkins Jr." <jackalmage@gmail.com>, Stephen White <senorblanco@chromium.org>, "public-fx@w3.org" <public-fx@w3.org>
- Message-ID: <CAKZ+BNpb-G2F=7eDbQ8T9BAffWmCmYLVipgG7dgcxHkuX36eqQ@mail.gmail.com>
On Fri, Mar 15, 2013 at 3:09 PM, Dirk Schulze <dschulze@adobe.com> wrote:

> On Mar 15, 2013, at 2:40 PM, Gregg Tavares <gman@google.com> wrote:

> > On Fri, Mar 15, 2013 at 2:33 PM, Dirk Schulze <dschulze@adobe.com> wrote:

> > On Mar 15, 2013, at 1:50 PM, Gregg Tavares <gman@google.com> wrote:

> > > This brings up an issue that I don't *think* is addressed by the current custom filters proposal?

> > > In custom filters, assuming they use GLSL, there are global values available. For example

> > > gl_FragCoord

> > > that are in device coordinates. If we want CSS custom filters to be device-independent the spec will probably need to mention that shaders using gl_FragCoord are disallowed or else that implementations must re-write the shader so that gl_FragCoord is in CSS pixels and not device pixels.

> > gl_FragCoord must be in device coordinates to provide access to the texture data. For shaders it doesn't make sense that the pixel information is in CSS coordinates. Ditto for the textureSize uniform which represents the actual texture size. I added a description to the spec to clarify this fact.

> > The problem you'll have is the texture size will be different on an HD-DPI display vs a LO-DPI display and effects like generating a checkerboard from gl_FragCoord will generate the wrong size checkerboard.

> Since the author has the texture size it should be no problem at all to normalize all relevant data.

It's not about being possible. It's about being portable. I make a texture 256x256. I make a shader that goes:

    bool check = mod(floor(gl_FragCoord.x / 128.0) + floor(gl_FragCoord.y / 128.0), 2.0) < 1.0;
    gl_FragColor = check ? vec4(1,0,0,1) : vec4(0,1,0,1);

On an HD-DPI machine that texture is magically made some other size, for example 512x512. Because of that I get a 4x4 checkerboard instead of the 2x2 checkerboard I wanted.

If gl_FragCoord was in CSS pixels, the same units I specified the texture size in, then the code is magically portable.

> Greetings,
> Dirk

> > Greetings,
> > Dirk

> > > On Fri, Mar 15, 2013 at 12:54 PM, Tab Atkins Jr. <jackalmage@gmail.com> wrote:

> > > On Fri, Mar 15, 2013 at 11:59 AM, Stephen White <senorblanco@chromium.org> wrote:

> > > > In particular, in Chrome's accelerated implementation, on a high-DPI display, we get high-DPI input images from the compositor. Right now, we filter the high-DPI image by the original (unscaled) parameter values, which, for filters where a pixel's result depends on more than a single input pixel value (e.g., blur(), drop-shadow()), results in less blurring than would be visible on a non-HighDPI display. This seems wrong. (Last time I checked, the non-composited path was downsampling the input primitive, giving a non-high-DPI result but correct amounts of blur, although that may have been fixed.)

> > > This is a bug in our implementation, then. The values in the functions are CSS values, so a length of "5px" means 5 CSS pixels, not 5 hardware pixels. The browser has to scale that to whatever internal notion of "pixel" it's using.

> > > > For blur() and drop-shadow(), it would be straightforward to scale the parameter values by the devicePixelRatio automatically, and achieve the correct amount of blurring without affecting the resolution of the result.
> > > > Of course, we could downsample the input primitive for all filters, but that would lose the high DPI even for those filters which are unaffected by this problem, e.g., brightness() etc.

> > > > However, for the reference filters, in particular feConvolveMatrix, it's not clear what the optimal behaviour is. It's tempting to simply multiply the kernelUnitLength by the devicePixelRatio, and apply the convolution as normal. However, that also loses high DPI, and incurs the cost of a downsample where it otherwise wouldn't be required (also note that kernelUnitLength seems to be unimplemented in WebKit, but that's our problem). Would it be a possibility to simply upsample the kernel by devicePixelRatio instead, and apply that kernel to the original unscaled image? (Or perhaps size' = (size - 1) * devicePixelRatio + 1 for odd kernel sizes?) This would result in a similar effect range, while preserving the resolution of the source image.

> > > > I have no idea if the convolution math is really correct this way, though. I'm guessing not, since if it was, presumably the spec would have allowed its use for kernelUnitLength application in general.

> > > I'm not sufficiently familiar with feConvolveMatrix to know how to handle it well. However, if you get a substantially different result (beyond rendering/scaling artifacts), the implementation is definitely wrong in some way. Neither SVG nor CSS should require knowledge of the device's DPI.

> > > ~TJ
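Tying this back to the checkerboard example earlier in the thread, here is a minimal GLSL sketch of the normalization Dirk suggests the author could do by hand. It assumes the engine exposes the backing texture size in device pixels through a uniform (called u_textureSize here, after the textureSize uniform Dirk mentions) and that the author passes the element's size in CSS pixels through a custom uniform (u_cssSize, a name invented for this example); neither name is taken from the spec text quoted above.

    precision mediump float;

    uniform vec2 u_textureSize;  // backing texture size in device pixels (assumed name)
    uniform vec2 u_cssSize;      // element size in CSS pixels, supplied by the author (assumed name)

    void main() {
        // Rescale device-pixel fragment coordinates into CSS pixels so the
        // checker squares come out the same apparent size on any display.
        vec2 cssCoord = gl_FragCoord.xy * (u_cssSize / u_textureSize);

        bool check = mod(floor(cssCoord.x / 128.0) + floor(cssCoord.y / 128.0), 2.0) < 1.0;
        gl_FragColor = check ? vec4(1.0, 0.0, 0.0, 1.0) : vec4(0.0, 1.0, 0.0, 1.0);
    }

On a display with devicePixelRatio 2, gl_FragCoord.x runs from 0 to 512 but cssCoord.x still runs from 0 to 256, so the shader draws the same 2x2 checkerboard it draws on a low-DPI display. The cost is that every author has to remember to apply this scaling themselves, which is the portability concern Gregg raises.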
Received on Friday, 15 March 2013 22:26:40 UTC