
Re: hit testing and retained graphics

From: Richard Schwerdtfeger <schwer@us.ibm.com>
Date: Thu, 23 Jun 2011 14:56:44 -0500
To: Cameron McCormack <cam@mcc.id.au>
Cc: Charles Pritchard <chuck@jumis.com>, Cynthia Shelly <cyns@microsoft.com>, david.bolter@gmail.com, Frank Olivier <Frank.Olivier@microsoft.com>, Mike@w3.org, public-canvas-api@w3.org, public-html@w3.org, public-html-a11y@w3.org
Message-ID: <OF771333D3.D5B49A3F-ON862578B8.00697937-862578B8.006D9023@us.ibm.com>


Rich Schwerdtfeger
CTO Accessibility Software Group

Cameron McCormack <cam@mcc.id.au> wrote on 06/22/2011 09:19:15 PM:

> From: Cameron McCormack <cam@mcc.id.au>
> To: Richard Schwerdtfeger/Austin/IBM@IBMUS
> Cc: Charles Pritchard <chuck@jumis.com>, Cynthia Shelly
> <cyns@microsoft.com>, david.bolter@gmail.com, Frank Olivier
> <Frank.Olivier@microsoft.com>, Mike@w3.org, public-canvas-api@w3.org,
> public-html@w3.org, public-html-a11y@w3.org
> Date: 06/22/2011 09:20 PM
> Subject: Re: hit testing and retained graphics
>
> Charles Pritchard:
> > > > Developers have told me that they want mouse events delegated
> > > > to the shadow dom, based on paths they've bound.
>
> Cameron McCormack:
> > > Can you explain this a bit more?  I didn’t quite follow.  Does it
> > > mean you want to have it so that
> > >
> > >   * you associate an element in the <canvas> subtree with a
> > >     particular region of the canvas (by giving it a path, say)
> > >   * mousedown/mouseup/etc. events get dispatched to the <canvas>
> > >     subtree element instead of the <canvas> itself
> > >
> > > ?
>
> Richard Schwerdtfeger:
> > Not wanting to respond for Charles necessarily, but yes. The current,
> > appropriate implementation for keyboard events is that they route to
> > the canvas subtree. This would allow the author to fully bind the
> > drawing object to the same DOM element that would process the keyboard
> > for it. It is a lighter weight solution than the full retained mode
> > graphics that I was proposing earlier.
>
> OK.  The keyboard events are dispatched to the elements in the subtree
> because they have focus, is that right?  I guess that doesn’t really
> work for mouse events, then, because they will go to whatever element
> appears underneath the pointer, which is going to be the <canvas>
> element itself.
>
> > Once a path is registered with canvas, the user agent receives mouse
> > events and assesses where they "hit" within the registered draw paths
> > (which also include Z order), then dispatches the events to the
> > <canvas> subtree element.
>
> Right, so this would need special treatment in the <canvas>
> implementation.  It feels a bit weird to me, though.  What if a region
> of the canvas is associated with an element that does not have uniform
> behaviour across the whole area of that element?  I can see it working
> OK for a <button>, but for a <select multiple>, say, the exact mousedown
> position is going to influence what values are going to be selected.
>
Yes. If the select were dropped down, you would want to create a bounding
path for the submenu (and possibly for the individual menu items as well).
The user agent would then dispatch the mousedown event to the DOM subtree
element to which the path was bound. Should the element not process the
mouse event, it can bubble up to the parent, just as it does in HTML today.
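As a rough sketch of the mechanism being discussed (setElementPath is the name proposed in this thread; the registry and delegation logic below are hypothetical, modeled with plain rectangles instead of a real 2D context):

```javascript
// Hypothetical sketch: the author binds a path to an element in the
// <canvas> subtree; the user agent hit-tests mouse events against the
// registered paths in Z order and retargets the event to the matching
// subtree element. Rectangles stand in for real canvas paths here.

const registeredPaths = []; // later registrations are "on top"

// Stand-in for the proposed ctx.setElementPath(element): bind the
// current path (here, just a rect) to a fallback-content element.
function setElementPath(element, rect) {
  registeredPaths.push({ element, rect });
}

// User-agent side: find the topmost registered path containing (x, y).
function hitTest(x, y) {
  for (let i = registeredPaths.length - 1; i >= 0; i--) {
    const { element, rect } = registeredPaths[i];
    if (x >= rect.x && x < rect.x + rect.w &&
        y >= rect.y && y < rect.y + rect.h) {
      return element;
    }
  }
  return null; // no hit: the event stays on the <canvas> itself
}

// Example: a drawn "submenu" overlapping a drawn "button".
setElementPath({ id: 'button' },  { x: 10, y: 10, w: 100, h: 30 });
setElementPath({ id: 'submenu' }, { x: 50, y: 20, w: 120, h: 80 });

console.log(hitTest(60, 25).id);  // "submenu" (registered last, so on top)
console.log(hitTest(15, 15).id);  // "button"
console.log(hitTest(300, 300));   // null -> event stays on <canvas>
```

If the target element declined to handle the event, it would bubble to its ancestors in the subtree, as described above.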

> > > So, setElementPath() associates the canvas context’s current path
> > > with that element.  Apart from the automatic event dispatching thing
> > > you mention above, what effect does this association have?  (Sorry
> > > if this is obvious, since I don’t have the context here.)
> >
> > This would also allow a user agent to provide the physical bounds of
> > the drawing object to the object in the fallback content that supplies
> > information to platform accessibility APIs. This is essential for
> > screen magnification and for driving Braille devices from a screen
> > reader. Chuck is part of the canvas HTML accessibility sub-team, and
> > we are attempting to bind an accessibility requirement to a mainstream
> > need for hit testing.
>
> The magnification case I can understand; you need to know which part of
> the canvas is “focussed” so you can zoom in on it.  I’m not sure what
> the Braille mode would do with bounds of objects?
>
Screen readers use the concept of a "line" to group elements that appear
on the same visual line in the UI. This information is used to drive a
refreshable Braille display (say, 80 cells). Each line either fills the
refreshable display or wraps. Positional and bounding path information
are used to facilitate this. A cell may represent an image, text, or some
other drawing object.
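To illustrate why the bounds matter here (a hypothetical sketch, not how any particular screen reader is implemented): given per-object bounding boxes, an assistive technology can group objects whose vertical extents overlap into "lines" and order each line left to right before sending it to the display.

```javascript
// Hypothetical line-grouping pass: objects whose bounding boxes overlap
// vertically are treated as one visual line, ordered left to right.

function groupIntoLines(objects) {
  const lines = [];
  for (const obj of objects) {
    // An object joins a line if its vertical extent overlaps the line's.
    const line = lines.find(l =>
      obj.y < l.bottom && obj.y + obj.h > l.top);
    if (line) {
      line.items.push(obj);
      line.top = Math.min(line.top, obj.y);
      line.bottom = Math.max(line.bottom, obj.y + obj.h);
    } else {
      lines.push({ top: obj.y, bottom: obj.y + obj.h, items: [obj] });
    }
  }
  for (const line of lines) line.items.sort((a, b) => a.x - b.x);
  return lines;
}

// Two objects on one visual row, a button on the next.
const lines = groupIntoLines([
  { name: 'Name:', x: 10, y: 12, w: 60,  h: 16 },
  { name: 'input', x: 80, y: 10, w: 120, h: 20 },
  { name: 'OK',    x: 10, y: 50, w: 40,  h: 20 },
]);
console.log(lines.map(l => l.items.map(i => i.name).join(' ')));
// → [ 'Name: input', 'OK' ]
```

Without bounds for the drawn objects, none of this grouping is possible for canvas content.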

> So normally, I imagine, hit testing would be done either by using
> isPointInPath() or by custom code looking at a mouse event’s x/y values.
> I think this proposal doesn’t work with isPointInPath(), though, is that
> right?
>
I think it would, but we would need to incorporate Z order and a notion of
the last-drawn element to compute which element is on top. The user agent
would need to manage this.
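For contrast, here is roughly how authors do this by hand today, as the question suggests: keep your own display list in draw order and test from the top down. In a browser you would rebuild each path and call ctx.isPointInPath(x, y); a ray-casting point-in-polygon test stands in for it below so the sketch is self-contained.

```javascript
// Author-side hit testing today (sketch): iterate a manually maintained
// display list in reverse draw order, so the last-drawn shape wins.

// Stand-in for ctx.isPointInPath(x, y): ray-casting point-in-polygon.
function pointInPolygon(pts, x, y) {
  let inside = false;
  for (let i = 0, j = pts.length - 1; i < pts.length; j = i++) {
    const [xi, yi] = pts[i], [xj, yj] = pts[j];
    if ((yi > y) !== (yj > y) &&
        x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// Display list in draw order; the last-drawn shape is topmost.
const displayList = [
  { id: 'background', path: [[0, 0], [200, 0], [200, 200], [0, 200]] },
  { id: 'triangle',   path: [[50, 50], [150, 50], [100, 150]] },
];

function topmostHit(x, y) {
  for (let i = displayList.length - 1; i >= 0; i--) {
    if (pointInPolygon(displayList[i].path, x, y)) {
      return displayList[i].id;
    }
  }
  return null;
}

console.log(topmostHit(100, 80)); // "triangle" (drawn last, so on top)
console.log(topmostHit(10, 10));  // "background"
```

Under the proposal, the user agent would maintain an equivalent of this display list internally, which is what the reply above means by managing Z order and the last-drawn element.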

> --
> Cameron McCormack ≝ http://mcc.id.au/
Received on Thursday, 23 June 2011 19:57:49 GMT
