Re: Fw: Request to re-open issue 131 -USE CASES, USE CASES, USE CASES

Hi Jonas,

you wrote:

> I honestly have lost track of what the latest proposal is at this point.

Frank's proposal:

http://www.w3.org/wiki/Canvas_hit_testing

here it is inline:

IMO we need a 'general purpose' hit testing solution here (to assist
in author uptake) with a very simple method that allows authors to see
what path/pixels are actually being set for hit testing:

boolean setElementPath(in Element element);
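
For illustration only (this is not part of Frank's text), here is
roughly how an author might call it under this proposal; the element
names and coordinates are placeholders, setElementPath() is of course
the proposed method rather than an existing one, and the interface
declaration is only there to make the sketch self-contained:

  // Proposed extension, declared here only so this sketch type-checks.
  interface HitTestingCanvasContext extends CanvasRenderingContext2D {
    setElementPath(element: Element): boolean;
  }

  const canvas = document.querySelector('canvas')!;
  const ctx = canvas.getContext('2d') as HitTestingCanvasContext;

  // A focusable element the author has placed in the canvas fallback
  // content.
  const okButton = canvas.querySelector('#ok-button')!;

  // Trace the region just drawn for the control, then hand that path
  // to the user agent as the hit/focus region for the fallback element.
  // Returns false if okButton is not a descendant of the canvas.
  ctx.beginPath();
  ctx.rect(10, 10, 80, 30);
  const accepted = ctx.setElementPath(okButton);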

I would define this as: (Additional spec text for
http://dev.w3.org/html5/canvas-extensions/Overview.html#focus-management-1)

When a canvas is interactive, authors should include focusable
elements in the element's fallback content corresponding to each
focusable part of the canvas.

When multiple focusable elements are added, authors should use
setElementPath() to set the focus ring of each individual focusable
element. If the focus ring is not set with setElementPath(), the focus
ring of a focusable element in the fallback content is the bounding
rectangle of the parent canvas element. [This improves accessibility
for the case where the entire canvas element represents a single
interactive control (think of a very simple custom-drawn checkbox) and
click handling for the fallback element is done entirely by the author
- the single checkbox case.]
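
As a rough illustration of the multiple-element case (again not part
of Frank's text; the ids and coordinates are made up, and this reuses
the canvas/ctx from the sketch above):

  // Two focusable controls in the fallback content, each given its
  // own focus ring / hit region. Without these calls, both would fall
  // back to the bounding rectangle of the canvas element itself.
  const play = canvas.querySelector('#play')!;
  const stop = canvas.querySelector('#stop')!;

  ctx.beginPath();
  ctx.rect(10, 10, 40, 40);   // where the play control was drawn
  ctx.setElementPath(play);

  ctx.beginPath();
  ctx.rect(60, 10, 40, 40);   // where the stop control was drawn
  ctx.setElementPath(stop);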

When setElementPath() is called, the drawing path is used to form the
focus ring, provided that the drawing path contains a closed path. The
drawing path is also used to form a best-fit bounding rectangle in
screen coordinates. The bounding rectangle and drawing path may be used
to enhance accessibility properties [ARIA] for the targeted element.

User agents should use the information set by setElementPath() to
create accessible user experiences. For example, a screen reader may
read the fallback element's details when the user indicates interest
in that region of the canvas.

The setElementPath(element) method, when invoked, must run the
following steps:

1. If the element is not a descendant of the canvas element with whose
context the method is associated, then return false and abort these
steps.

2. If supporting an accessibility API, user agents may use the drawing
path to form a best-fit rectangle in screen coordinates and apply it
to the bounding rectangle of the associated accessible object. The
focus ring should be subject to the clipping region.

3. Return true.
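
In other words (a rough sketch of the observable behaviour, not
normative text; the function name is made up and step 2 is user agent
internal):

  function setElementPathSteps(canvas: HTMLCanvasElement,
                               element: Element): boolean {
    // Step 1: the element must be a descendant of the canvas with
    // whose context the method is associated (the canvas itself does
    // not count as its own descendant).
    if (element === canvas || !canvas.contains(element)) {
      return false;
    }
    // Step 2 (user agent internal, when an accessibility API is
    // supported): form a best-fit rectangle in screen coordinates
    // from the current drawing path, subject to the clipping region,
    // and apply it to the bounding rectangle of the associated
    // accessible object.

    // Step 3:
    return true;
  }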

When the user interacts with the canvas, the user agent should forward
the event to the fallback element.

If two or more elements have overlapping paths (set via
setElementPath()), the last call to setElementPath() wins.
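
For example (again a sketch, using the same assumed context as above;
the element names and coordinates are made up):

  const buttonA = canvas.querySelector('#button-a')!;
  const buttonB = canvas.querySelector('#button-b')!;

  // Two overlapping rectangles bound to two different fallback
  // elements.
  ctx.beginPath();
  ctx.rect(0, 0, 100, 100);
  ctx.setElementPath(buttonA);

  ctx.beginPath();
  ctx.rect(50, 0, 100, 100);
  ctx.setElementPath(buttonB);

  // A pointer event at (75, 50) lies inside both paths; under the
  // rule above the later call wins, so the user agent forwards the
  // event to buttonB.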

regards
Stevef

On 20 December 2011 10:22, Jonas Sicking <jonas@sicking.cc> wrote:
> On Mon, Dec 19, 2011 at 1:19 PM, Richard Schwerdtfeger <schwer@us.ibm.com>
> wrote:
>>
>> Jonas,
>>
>> For purposes of getting some consensus, I would like to put the text
>> discussion aside and focus on this use case, which you had agreed we
>> should support while at TPAC:
>>
>>
>>
>> 1. Hit Testing and the bounds of an object
>>
>> USE CASE: Regarding hit testing, it is very, very simple. In ALL
>> operating systems that support an accessibility API it is ESSENTIAL that
>> a magnifier be able to determine the location of an accessible object on
>> the screen so that a user may zoom to it. It has absolutely nothing to do
>> with rich text editing other than the fact that, like all other objects,
>> we would need to find the text box to zoom to it. You and I, who can see,
>> can scan a page and find what we want. Yet a magnifier user may only be
>> able to see, say, a text box which has focus and a few characters, as the
>> screen may be magnified by a factor of 10. The few characters in the text
>> box may be all they see on the screen. So, to zoom to something else they
>> will ask their assistive technology to do things like find an object and
>> zoom to it - or they may ask it to read from the beginning of an
>> application at the first accessible object and maintain a magnification
>> point around the object.
>>
>> Unlike HTML, accessible canvas objects reside in fallback content, which
>> is NOT visible. So, the screen location of these objects can NOT be found
>> without programmatic intervention. In ALL accessible GUI OS platforms the
>> bounds of the drawing object are acquired from the device context, which
>> is ultimately mapped to the drawing object and then to the corresponding
>> accessible object. The screen location is typically the same location
>> used in hit testing.
>>
>> USE CASE: Braille devices also use the bounding information to assist
>> with line breaks on Braille displays.
>>
>> How do I know these things? I built the offscreen model for the first
>> GUI screen readers for the PC. I was hip-deep in the graphics engine and
>> windowing systems for both OS/2 and Windows. I also worked on one of the
>> first screen magnifiers for the PC - Screen Magnifier/2.
>>
>> So, there are your use cases. There is NO invention here and the text
>> editor case is really a red herring as it is not the essential reason why we
>> need the bounds and hit testing.
>>
>> USE CASE: The use case for hit testing is that it pushes the load off
>> the author and onto the user agent. Imagine having to do all the GUI hit
>> testing manually for your Windows app. Also, right now, pointing device
>> handling occurs at the canvas element while keyboard handling is done at
>> an element in fallback content.
>>
>> Here is the accessibility API for UNIX systems that needs the bounds (see
>> BoundingBox) of an object:
>> http://people.gnome.org/~billh/at-spi-idl/html/classAccessibility_1_1Component.html
>> Here is the accessibility API (see accLocation) for MSAA, which is used
>> by both Chrome and Firefox on Windows:
>> http://msdn.microsoft.com/en-us/library/dd318466.aspx
>> Here is the accessibility API (see Bounding Box) for a UIA provider:
>> http://msdn.microsoft.com/en-us/library/ms726714(v=VS.85).aspx
>>
>> Right now, without a change to canvas we cannot supply this information to
>> assistive technologies.
>
>
> Yes, I definitely support the ability to associate an area of the canvas
> with an element in the sub-DOM (sorry, I forget what the official name
> is, if there is one) of the canvas element. This will enable things like
> hit-testing, driving screen magnifiers, implementing
> scrolling-to-part-of-canvas, etc.
>
> I apologize if I gave the impression otherwise.
>
>>
>> Do you support Frank moving forward with the setElementPath/hit test
>> proposal for the working group to review and are you still supportive of
>> having such an API for canvas?
>
>
> I honestly have lost track of what the latest proposal is at this point. The
> main goal I have is to create an API which is simple enough that people
> will want to do their own canvas hit-testing using the API we provide.
> That is how we can get the greatest number of people to use these APIs,
> and thus create the most accessible web.
>
> / Jonas

Received on Tuesday, 20 December 2011 10:35:50 UTC