- From: Chris Marrin <cmarrin@apple.com>
- Date: Thu, 06 Oct 2011 09:44:47 -0700
- To: robert@ocallahan.org
- Cc: Gregg Tavares <gman@google.com>, Dean Jackson <dino@apple.com>, www-style@w3.org
On Oct 5, 2011, at 9:27 PM, Robert O'Callahan wrote:

> On Thu, Oct 6, 2011 at 11:10 AM, Chris Marrin <cmarrin@apple.com> wrote:
>> We do that by reverse mapping the mouse positions from global into local coordinates. That's easy with a single (or composite) transform. It's impossible for pixels that are altered by a vertex or fragment shader. You'd essentially have to run the shaders backwards to get them to tell you where a given point is in the local coordinate space. I know of no hardware that can do that. I haven't seen it anyway...
>
> I think we can give up on having event handling account for the geometric effects of fragment shaders. We should encourage authors to change geometry using vertex shaders instead.
>
> In principle, we can have event handling account for the effects of a vertex shader. Once you've computed the transformed mesh, you can figure out which point on the transformed mesh is topmost under the event position, and map that point back to the pre-transform mesh.

But imagine an author who wants to use a very dense mesh to get smooth curves that closely match the curved effect you can get from a fragment shader. Mapping mouse positions with such a mesh would require thousands of matrix operations, including inversions, which are very expensive. Especially on mobile hardware, this would be too expensive to be practical.

When native apps need to do this sort of thing, they will often construct a second, simpler set of geometry that animates along with the finer mesh and is used for things like picking and collision detection. We could do something like that, but it would probably be hard to come up with the right heuristics. And it would still require running the shader in software, which would be quite a big impediment to many implementations.

> This would be difficult to implement unless you can run the vertex shader on the CPU (or you do some hairy CUDA/OpenCL implementation). However, there is another very good reason to be able to run the vertex shader outside the rendering pipeline: automatic calculation of filter bounds, so we can get rid of that horrible filter-margin stuff. Vincent says that the meshes used in the Adobe demos are usually not very large, so this may be feasible. (And automatically lowering the mesh resolution if necessary is also feasible.)
>
> This may mean that we should choose a different, simpler language for vertex shaders than full WebGL vertex shaders.

I think it would be a huge mistake to blaze the trail here. Designing a shading language is not trivial, and WebGL has done a lot of work on security with the current shader model. I don't think we should try to tackle all that. I think for now it would be best not to deal with the picking issue and to hang on to filter-margin to deal with the bounds issues.

-----
~Chris
cmarrin@apple.com
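A rough sketch, not part of the original thread, of the picking strategies being weighed above, written as plain TypeScript; all names and types are hypothetical. `globalToLocal` is the easy case for an ordinary composite transform; `pickOnMesh` is the "in principle" approach for a vertex-shader-displaced mesh, which needs the displaced vertex positions available on the CPU and a per-triangle test (the cost Chris objects to for dense meshes); `filterBounds` is one way to get the automatic bounds calculation Robert mentions. Overlap/z-ordering is ignored for brevity.

```typescript
type Vec2 = { x: number; y: number };

// 2D affine transform [a c e; b d f], as in CSS 2D transforms.
type Affine = { a: number; b: number; c: number; d: number; e: number; f: number };

// Case 1: an ordinary (composite) transform. Mapping a global point back into
// local coordinates is a single matrix inversion -- the "easy" case.
function globalToLocal(p: Vec2, m: Affine): Vec2 | null {
  const det = m.a * m.d - m.b * m.c;
  if (det === 0) return null; // degenerate transform, nothing to hit-test
  const x = p.x - m.e;
  const y = p.y - m.f;
  return { x: (m.d * x - m.c * y) / det, y: (-m.b * x + m.a * y) / det };
}

// Case 2: a mesh whose vertices were displaced by a vertex shader. Picking tests
// the event point against each displaced triangle and maps the hit back to the
// untransformed mesh with barycentric weights; cost grows with mesh density.
interface Mesh {
  original: Vec2[];    // vertices before the vertex shader ran
  transformed: Vec2[]; // vertices after the vertex shader ran (needs a CPU run)
  triangles: [number, number, number][];
}

function barycentric(p: Vec2, a: Vec2, b: Vec2, c: Vec2): [number, number, number] | null {
  const d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
  if (d === 0) return null;
  const u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
  const v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
  const w = 1 - u - v;
  return u >= 0 && v >= 0 && w >= 0 ? [u, v, w] : null;
}

function pickOnMesh(p: Vec2, mesh: Mesh): Vec2 | null {
  // A real implementation would pick the topmost overlapping triangle; this
  // sketch just takes the first hit.
  for (const [i, j, k] of mesh.triangles) {
    const bary = barycentric(p, mesh.transformed[i], mesh.transformed[j], mesh.transformed[k]);
    if (!bary) continue;
    const [u, v, w] = bary;
    const [a, b, c] = [mesh.original[i], mesh.original[j], mesh.original[k]];
    // The same barycentric weights applied to the pre-transform triangle give
    // the local-coordinate point the event should be delivered to.
    return { x: u * a.x + v * b.x + w * c.x, y: u * a.y + v * b.y + w * c.y };
  }
  return null; // event point is outside the displaced mesh
}

// The other motivation for running the vertex shader on the CPU: the bounding
// box of the displaced vertices estimates the filter bounds with no
// author-supplied filter-margin.
function filterBounds(transformed: Vec2[]) {
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const v of transformed) {
    minX = Math.min(minX, v.x); minY = Math.min(minY, v.y);
    maxX = Math.max(maxX, v.x); maxY = Math.max(maxY, v.y);
  }
  return { minX, minY, maxX, maxY };
}
```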
Received on Thursday, 6 October 2011 16:45:43 UTC