- From: Kenneth Russell <kbr@google.com>
- Date: Wed, 13 Apr 2016 16:02:55 -0700
- To: Dominic Mazzoni <dmazzoni@google.com>
- Cc: Justin Novosad <junov@google.com>, public-canvas-api@w3.org, Frank Olivier <Frank.Olivier@microsoft.com>
- Message-ID: <CAMYvS2cok1A+Sk7iDWfz1ywY9fNVaDH1n4+upUz0-DG5r3n6eg@mail.gmail.com>
If a WebGL application knows what it's doing, there's no reason not to add the same support for hit regions to WebGLRenderingContext that CanvasRenderingContext2D has. There are a couple of basic reasons that WebGL behaves differently than the 2D context, though.

1. Vertex shaders can move around geometry on the GPU without the CPU having any knowledge of that fact. Because of this, keeping the CPU-side representation of a pickable region in sync with the GPU side is completely application-specific. It's not worth adding a new complicated type of primitive like a "mesh" to the browser in support of this functionality.

2. Picking mechanisms vary widely in OpenGL (and OpenGL ES, and WebGL) programs. Some programs prefer to do GPU-assisted picking, rendering the scene with a special shader and reading back a single pixel under the cursor. Some keep a representation of the scene on the CPU side and use data structures like octrees to optimize picking underneath the cursor.

With these caveats in mind, if adding HitRegion support to WebGLRenderingContext would improve accessibility for some kinds of 3D applications, let's do it!

Note: another primitive that would probably help in this domain would be a "getBufferSubDataAsync" method on the WebGL context. If that were added, then single-pixel readbacks could be done completely asynchronously in WebGL 2.0, avoiding graphics pipeline stalls when doing GPU-assisted picking.

-Ken

On Wed, Apr 13, 2016 at 8:48 AM, Dominic Mazzoni <dmazzoni@google.com> wrote:

> On Tue, Apr 12, 2016 at 6:39 PM Justin Novosad <junov@google.com> wrote:
>
>> On Apr 12, 2016 7:20 PM, "Dominic Mazzoni" <dmazzoni@google.com> wrote:
>> >
>> > What's the equivalent of a path in WebGL? Is that even a concept that
>> exists now or is it something we'd have to define?
>>
>> The Path2D interface could be used with WebGL, but it is not ideal since
>> it restricts the definition of a region to a planar geometry. It would
>> probably make more sense to have some sort of 3D mesh primitive to get hit
>> regions that map more directly to elements in a WebGL scene.
>>
> I guess my question is, would a significant fraction of WebGL apps benefit
> from an API like this? Some examples of where I'm skeptical:
>
> * Some apps use large 3-D libraries on top of WebGL. Those libraries
> probably already implement hit testing, and trying to extract the right 3-D
> mesh and transformation matrix would be more work than just using the
> existing function.
> * Any complex 3-D scenes with obscured objects would need to expose nearly
> everything in the scene in order for hit testing to work correctly, and I'm
> afraid it would hurt performance to try to extract all of that information
> and do hit testing on the cpu instead of on the gpu.
>
> The 2D canvas context is a relatively high-level graphics API and it's
> natural for authors to already have paths representing objects, so adding
> hit testing is a natural extension of the straightforward use of this API.
> I'm just not sure that's true for WebGL.
>
> - Dominic
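For concreteness, here is a minimal sketch of the GPU-assisted picking pattern Ken describes above: each pickable object is rendered into an offscreen framebuffer with a flat color that encodes its id, and the single pixel under the cursor is read back and decoded. The `pickFramebuffer` and `drawSceneForPicking` names stand in for application-specific setup and are assumptions for illustration, not part of any proposed API.

```ts
// Sketch of GPU-assisted picking (illustrative only, not a proposed API).
// Assumes `pickFramebuffer` is an offscreen framebuffer and
// `drawSceneForPicking` renders each object with its id encoded as a color.
function pickObjectAt(
  gl: WebGLRenderingContext,
  x: number,
  y: number,
  pickFramebuffer: WebGLFramebuffer,
  drawSceneForPicking: () => void
): number | null {
  gl.bindFramebuffer(gl.FRAMEBUFFER, pickFramebuffer);
  gl.clearColor(0, 0, 0, 0);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // Render every pickable object with a flat color encoding its id.
  drawSceneForPicking();

  // Read back exactly one pixel under the cursor. readPixels is synchronous,
  // so this is where the pipeline stall mentioned above occurs.
  const pixel = new Uint8Array(4);
  gl.readPixels(x, gl.drawingBufferHeight - y - 1, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  // Decode the 24-bit object id packed into RGB; 0 means nothing was hit.
  const id = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
  return id === 0 ? null : id;
}
```

Because the depth test resolves occlusion during the picking pass, the nearest object under the cursor wins automatically, which is one reason this approach is attractive for complex scenes with obscured objects.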
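The note about "getBufferSubDataAsync" can likewise be illustrated with calls that do exist in WebGL 2.0: readPixels into a PIXEL_PACK_BUFFER returns without waiting, a fence sync object marks when the GPU has finished, and the result is copied out later with getBufferSubData. This is only a sketch of the pattern such a method would streamline, not the proposed method itself.

```ts
// Sketch of an asynchronous single-pixel readback using standard WebGL 2.0
// calls (PIXEL_PACK_BUFFER plus a fence). Illustrative only; this is the
// pattern a getBufferSubDataAsync method would simplify, not that method.
function readPixelAsync(
  gl: WebGL2RenderingContext,
  x: number,
  y: number
): Promise<Uint8Array> {
  const buf = gl.createBuffer();
  gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buf);
  gl.bufferData(gl.PIXEL_PACK_BUFFER, 4, gl.STREAM_READ);

  // With a pack buffer bound, readPixels writes into GPU-side memory and
  // returns immediately instead of stalling the graphics pipeline.
  gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, 0);

  const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0)!;
  gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
  gl.flush();

  return new Promise((resolve) => {
    const poll = () => {
      // Poll the fence without blocking; try again next frame if not done.
      if (gl.clientWaitSync(sync, 0, 0) === gl.TIMEOUT_EXPIRED) {
        requestAnimationFrame(poll);
        return;
      }
      gl.deleteSync(sync);
      const pixel = new Uint8Array(4);
      gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buf);
      gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, pixel);
      gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
      gl.deleteBuffer(buf);
      resolve(pixel);
    };
    poll();
  });
}
```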
Received on Wednesday, 13 April 2016 23:03:23 UTC