W3C home > Mailing lists > Public > www-style@w3.org > October 2011

Re: [css-shaders] CSS shaders for custom filters (ACTION-3072)

From: Charles Pritchard <chuck@jumis.com>
Date: Wed, 5 Oct 2011 22:27:05 -0700
Message-Id: <2523AE1B-1E40-4DB4-889F-1D5A4539B515@jumis.com>
Cc: "robert@ocallahan.org" <robert@ocallahan.org>, Chris Marrin <cmarrin@apple.com>, Gregg Tavares <gman@google.com>, Dean Jackson <dino@apple.com>, "www-style@w3.org" <www-style@w3.org>
To: Rik Cabanier <cabanier@gmail.com>
Top-post. 

It'd work well in the established case of a simple box; it'd work poorly, but sufficiently, on a simple warp where the calculations are applied only at the x/y and x+width/y+height corners. It'd work well where author intent is defined via scripting. It's a mess, but definable.

The goal is making it definable; making it automatic goes a bit too far. My motivation is to let authors know that it can be automatic in some cases, and to let browser vendors know that it must be author-defined in other cases.




On Oct 5, 2011, at 9:51 PM, Rik Cabanier <cabanier@gmail.com> wrote:

> How would running a vertex shader do automatic bounds calculation?
> I can imagine this working if you're only deforming your plane, but not when you're doing pixel operations, such as blurring, that make your content larger.
> 
> I agree that if the goal is reliable hit testing, geometry changes should only happen through vertex shaders.
> If hit testing is enabled on elements inside the shaders, the browser could run the mesh through a software implementation of GLSL and then calculate the position by doing raytracing. This could be very CPU intensive...
> 
> Rik
> 
> On Wed, Oct 5, 2011 at 9:27 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
> On Thu, Oct 6, 2011 at 11:10 AM, Chris Marrin <cmarrin@apple.com> wrote:
> We do that by reverse mapping the mouse positions from global into local coordinates. That's easy with a single (or composite) transform. It's impossible for pixels that are altered by a vertex or fragment shader. You'd essentially have to run the shaders backwards to get them to tell you where a given point is in the local coordinate space. I know of no hardware that can do that. I haven't seen it anyway...
>  
> I think we can give up on having event handling account for the geometric effects of fragment shaders. We should encourage authors to change geometry using vertex shaders instead.
> 
> In principle, we can have event handling account for the effects of a vertex shader. Once you've computed the transformed mesh, you can figure out which point on the transformed mesh is topmost under the event position, and map that point back to the pre-transform mesh.
> 
> This would be difficult to implement unless you can run the vertex shader on the CPU (or you do some hairy CUDA/OpenCL implementation). However, there is another very good reason to be able to run the vertex shader outside the rendering pipeline: automatic calculation of filter bounds, so we can get rid of that horrible filter-margin stuff. Vincent says that the meshes used in the Adobe demos are usually not very large, so this may be feasible. (And automatically lowering the mesh resolution if necessary is also feasible.)
> 
> This may mean that we should choose a different, simpler language for vertex shaders than full WebGL vertex shaders.
> 
> 
> Rob
> -- 
> "If we claim to be without sin, we deceive ourselves and the truth is not in us. If we confess our sins, he is faithful and just and will forgive us our sins and purify us from all unrighteousness. If we claim we have not sinned, we make him out to be a liar and his word is not in us." [1 John 1:8-10]
> 
Received on Thursday, 6 October 2011 05:27:36 GMT

This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 17:20:45 GMT