Re: hit testing and retained graphics

Charles Pritchard:
> > > Developers have told me that they want mouse events delegated
> > > to the shadow DOM, based on the paths they've bound.

Cameron McCormack:
> > Can you explain this a bit more?  I didn’t quite follow.  Does it mean
> > you want to have it so that
> >
> >   * you associate an element in the <canvas> subtree with a particular
> >     region of the canvas (by giving it a path, say)
> >   * mousedown/mouseup/etc. events get dispatched to the <canvas> subtree
> >     element instead of the <canvas> itself
> >
> > ?

Richard Schwerdtfeger:
> Not wanting to respond for Charles necessarily, but yes. The current,
> appropriate implementation for keyboard events is that they route to the
> canvas subtree. This would allow the author to fully bind the drawing
> object to the same DOM element that would process the keyboard input for
> it. It is a lighter-weight solution than the full retained-mode graphics
> I was proposing earlier.

OK.  The keyboard events are dispatched to the elements in the subtree
because they have focus, is that right?  I guess that doesn’t really
work for mouse events, then, because they will go to whatever element
appears underneath the pointer, which is going to be the <canvas>
element itself.
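
In script terms, the status quo looks something like this to me (just a
sketch; the markup and names are arbitrary):

    // A <canvas> with a focusable <button> in its fallback content.
    const canvas = document.querySelector('canvas')!;
    const button = canvas.querySelector('button')!;

    button.addEventListener('keydown', (e) => {
      // This fires: keyboard events follow focus into the fallback content.
      console.log('keydown on fallback button:', e.key);
    });

    canvas.addEventListener('mousedown', (e) => {
      // This fires instead for the mouse: the pointer is over the <canvas>,
      // so the event targets the canvas, not the fallback element that
      // "represents" the drawn region under the pointer.
      console.log('mousedown on canvas at', e.offsetX, e.offsetY);
    });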

> Once a path is registered with canvas, the user agent receives mouse events
> and assesses where they "hit" within the registered draw paths (which also
> include Z order), and dispatches the events to the <canvas> subtree element.

Right, so this would need special treatment in the <canvas>
implementation.  It feels a bit weird to me, though.  What if a region
of the canvas is associated with an element that does not have uniform
behaviour across its whole area?  I can see it working OK for a
<button>, but for a <select multiple>, say, the exact mousedown position
is going to influence which values get selected.
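
To make that dispatch concrete, here is a rough script-land sketch of
what I understand the user agent would be doing.  PathRecord and
registerPath are names I have made up for illustration; presumably the
registration call in the proposal would perform this association inside
the user agent instead:

    interface PathRecord {
      element: HTMLElement;  // the bound <canvas> subtree element
      buildPath: (ctx: CanvasRenderingContext2D) => void;  // rebuilds its path
      z: number;  // stacking order; higher sits on top
    }

    const records: PathRecord[] = [];
    const canvas = document.querySelector('canvas')!;
    const ctx = canvas.getContext('2d')!;

    function registerPath(record: PathRecord): void {
      records.push(record);
    }

    canvas.addEventListener('mousedown', (e) => {
      if (!e.isTrusted) return;  // ignore the copies we dispatch below
      // Test the registered paths from topmost to bottommost; first hit
      // wins.  (Assumes the canvas is not CSS-scaled, so offsetX/Y map
      // straight onto canvas coordinates.)
      const byZ = [...records].sort((a, b) => b.z - a.z);
      for (const r of byZ) {
        r.buildPath(ctx);  // make r's path the context's current path
        if (ctx.isPointInPath(e.offsetX, e.offsetY)) {
          // Retarget: dispatch a copy of the event to the bound element.
          r.element.dispatchEvent(new MouseEvent('mousedown', e));
          break;
        }
      }
    });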

> > So, setElementPath() associates the canvas context’s current path with
> > that element.  Apart from the automatic event dispatching thing you
> > mention above, what effect does this association have?  (Sorry if this
> > is obvious, since I don’t have the context here.)
>
> This would also allow a user agent to provide the physical bounds of the
> drawing object to the object in the fallback content that supplies
> information to platform accessibility APIs. This is essential for screen
> magnification and for driving Braille devices from a screen reader. Chuck is
> part of the canvas HTML accessibility sub-team, and we are attempting to
> bind an accessibility requirement to a mainstream need for hit testing.

The magnification case I can understand; you need to know which part of
the canvas is “focussed” so you can zoom in on it.  I’m not sure what a
Braille device would do with the bounds of objects, though?

So normally, I imagine, hit testing would be done either by using
isPointInPath() or by custom code looking at a mouse event’s x/y values.
I think this proposal doesn’t work with isPointInPath(), though, is that
right?
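
For contrast, the isPointInPath() pattern I have in mind looks roughly
like this (the circle is just a stand-in shape):

    const canvas = document.querySelector('canvas')!;
    const ctx = canvas.getContext('2d')!;

    canvas.addEventListener('mousedown', (e) => {
      // Rebuild the shape's path, then ask whether the event landed in it.
      ctx.beginPath();
      ctx.arc(100, 100, 50, 0, Math.PI * 2);
      if (ctx.isPointInPath(e.offsetX, e.offsetY)) {
        // On a hit, the author routes the event to their own object model.
        console.log('the circle was hit');
      }
    });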

-- 
Cameron McCormack ≝ http://mcc.id.au/

Received on Thursday, 23 June 2011 02:20:23 UTC