- From: Jonas Sicking <jonas@sicking.cc>
- Date: Tue, 20 Dec 2011 02:22:08 -0800
- To: Richard Schwerdtfeger <schwer@us.ibm.com>
- Cc: chuck@jumis.com, Cynthia Shelly <cyns@microsoft.com>, david bolter <david.bolter@gmail.com>, dbolter@mozilla.com, franko@microsoft.com, Maciej Stachowiak <mjs@apple.com>, Paul Cotton <Paul.Cotton@microsoft.com>, public-canvas-api@w3.org, public-html@w3.org, public-html-a11y@w3.org, Sam Ruby <rubys@intertwingly.net>, Steve Faulkner <faulkner.steve@gmail.com>
- Message-ID: <CA+c2ei_h0mnO4R4SbWUtiQ+Qbezgm4Qe-ijNot53GBGTDkGBpg@mail.gmail.com>
On Mon, Dec 19, 2011 at 1:19 PM, Richard Schwerdtfeger <schwer@us.ibm.com> wrote:

> Jonas,
>
> For purposes of getting some consensus I would like to set aside the text discussion and focus on this use case, which you had agreed we should support while at TPAC:
>
> 1. Hit Testing and the bounds of an object
>
> USE CASE: Regarding hit testing, it is very, very simple. In ALL operating systems that support an accessibility API it is ESSENTIAL that a magnifier be able to determine the location of an accessible object on the screen so that a user may zoom to it. It has absolutely nothing to do with rich text editing, other than the fact that, like all other objects, we would need to find the text box to zoom to it. You and I, who can see, can scan a page and find what we want. Yet a magnifier user may only be able to see, say, a text box which has focus and a few characters, as the screen may be magnified by a factor of 10. The few characters in the text box may be all they see on the screen. So, to zoom to something else they will ask their assistive technology to do things like find an object and zoom to it - or they may ask it to read from the beginning of an application at the first accessible object and maintain a magnification point around the object.
>
> Unlike HTML, accessible canvas objects reside in fallback content, which is NOT visible. So the screen location of these objects can NOT be found without programmatic intervention. In ALL accessible GUI OS platforms the bounds of the drawing objects are acquired from the device context, which is ultimately mapped to the drawing object and then to the corresponding accessible object. The screen location is typically the same location used in hit testing.
>
> USE CASE: Braille devices also use the bounding information to assist in line breaks on Braille displays.
>
> How do I know these things? I built the offscreen model for the first GUI screen readers for the PC. I was hip deep in the graphics engine and windowing systems for both OS/2 and Windows. I also worked on one of the first screen magnifiers for the PC - Screen Magnifier/2.
>
> So, there are your use cases. There is NO invention here, and the text editor case is really a red herring, as it is not the essential reason why we need the bounds and hit testing.
>
> USE CASE: The use case for hit testing is that it pushes the load off the author and onto the user agent. Imagine having to do all the GUI hit testing manually for your Windows app. Also, as things stand now, pointing device handling occurs at the canvas element while keyboard handling is handled at an element in the fallback content.
>
> Here is the accessibility API for UNIX systems that needs the bounds (see BoundingBox) of an object:
> http://people.gnome.org/~billh/at-spi-idl/html/classAccessibility_1_1Component.html
> Here is the accessibility API (see accLocation) for MSAA, which is used by both Chrome and Firefox on Windows:
> http://msdn.microsoft.com/en-us/library/dd318466.aspx
> Here is the accessibility API (see Bounding Box) for a UIA provider:
> http://msdn.microsoft.com/en-us/library/ms726714(v=VS.85).aspx
>
> Right now, without a change to canvas, we cannot supply this information to assistive technologies.

Yes, I definitely support the ability to associate an area of the canvas with an element in the sub-DOM (sorry, I forget what the official name is, if there is one) of the canvas element. This will enable things like hit-testing, driving screen magnifiers, implementing scrolling-to-part-of-canvas, etc. I apologize if I gave the impression otherwise.
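Just to make that concrete, here is a rough, purely illustrative sketch of the authoring model I have in mind. The setElementPath call and the element IDs below are my own placeholders, not taken from Frank's actual proposal; the real name and signature may well differ:

```
// Purely illustrative sketch - setElementPath is a placeholder here, not the
// actual proposal; the real name/signature may differ.
var canvas = document.getElementById("app-canvas");        // made-up ID
var ctx = canvas.getContext("2d");

// The checkbox lives in the canvas fallback content (the sub-DOM), so it is
// never rendered and has no on-screen bounds of its own.
var checkbox = document.getElementById("agree-checkbox");  // made-up ID

// Draw the control, then bind the path just traced to the fallback element,
// so the user agent can report that element's bounds to accessibility APIs
// (magnifiers, Braille displays) and use the same path for hit testing.
ctx.beginPath();
ctx.rect(20, 40, 16, 16);
ctx.stroke();
ctx.setElementPath(checkbox);   // hypothetical call
```

Something with roughly that little ceremony, right where the author is already drawing, is what I would consider simple enough for authors to actually adopt.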
> Do you support Frank moving forward with the setElementPath/hit test proposal for the working group to review, and are you still supportive of having such an API for canvas?

I honestly have lost track of what the latest proposal is at this point. The main goal I have is to create an API which is simple enough that people will want to do their own canvas hit-testing using the API we provide. That is how we can get the largest number of people to use these APIs, and thus create the most accessible web.
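For comparison, this is roughly (and untested - the IDs and coordinates are made up for illustration, and CSS scaling of the canvas is ignored) what authors have to do entirely by hand today, using nothing beyond the existing isPointInPath call: keep their own table of regions, re-trace each one on every pointer event, and forward the interaction to the matching fallback element. Note that none of this exposes any bounds to assistive technology; only the user agent can do that, which is why the association is needed in the first place.

```
// Rough, untested sketch of today's fully manual approach (illustrative IDs).
var canvas = document.getElementById("app-canvas");
var ctx = canvas.getContext("2d");

// Author-maintained mapping from drawn regions to fallback elements.
var regions = [
  { x: 20, y: 40, w: 16,  h: 16, element: document.getElementById("agree-checkbox") },
  { x: 20, y: 70, w: 120, h: 24, element: document.getElementById("name-field") }
];

canvas.addEventListener("click", function (event) {
  // Convert the pointer position to canvas coordinates (assumes the canvas
  // is not scaled by CSS).
  var rect = canvas.getBoundingClientRect();
  var x = event.clientX - rect.left;
  var y = event.clientY - rect.top;

  for (var i = 0; i < regions.length; i++) {
    var r = regions[i];
    // Re-trace the region's path and hit test it by hand.
    ctx.beginPath();
    ctx.rect(r.x, r.y, r.w, r.h);
    if (ctx.isPointInPath(x, y)) {
      r.element.focus();   // forward the interaction to the fallback element
      break;
    }
  }
  // None of this bookkeeping exposes the regions' bounds to assistive
  // technology - only the user agent can do that.
}, false);
```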
/ Jonas

Received on Tuesday, 20 December 2011 10:23:30 UTC