Fw: Request to re-open issue 131 -USE CASES, USE CASES, USE CASES

Jonas,

For the purpose of reaching some consensus, I would like to set the text
discussion aside and focus on this use case, which you agreed we should
support while at TPAC:

1. Hit Testing and the bounds of an object

USE CASE: Regarding hit testing, it is very, very simple. In ALL operating
systems that support an accessibility API it is ESSENTIAL that a magnifier
be able to determine the location of an accessible object on the screen so
that a user may zoom to it. It has absolutely nothing to do with rich text
editing other than the fact that, like all other objects, we would need to
find the text box in order to zoom to it. You and I, who can see, can scan
a page and find what we want. Yet a magnifier user may only be able to see,
say, a text box which has focus and a few characters, as the screen may be
magnified by a factor of 10. The few characters in the text box may be all
they see on the screen. So, to zoom to something else they will ask their
assistive technology to do things like find an object and zoom to it - or
they may ask it to read from the beginning of the application, starting at
the first accessible object, and maintain a magnification point around the
object.

Unlike HTML, accessible canvas objects reside in fallback content, which is
NOT visible. So, the screen location of these objects can NOT be found
without programmatic intervention. On ALL accessible GUI OS platforms the
bounds of the drawing object are acquired from the device context, which is
ultimately mapped to the drawing object and then to the corresponding
accessible object. The screen location is typically the same location used
in hit testing.
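
A minimal sketch of the problem (my own illustration, with assumed markup;
none of these identifiers come from any proposal): the fallback element is
never rendered, so its DOM geometry tells an assistive technology nothing
about where the author actually drew the object.

  const canvas = document.querySelector('canvas')!;
  const ctx = canvas.getContext('2d')!;
  // A checkbox living in the canvas fallback content.
  const fallbackCheckbox =
    canvas.querySelector('input[type=checkbox]') as HTMLElement;

  // The author draws the checkbox at (220, 140) on the canvas...
  ctx.strokeRect(220, 140, 16, 16);

  // ...but the rectangle reported for the fallback element bears no relation
  // to (220, 140), so a magnifier walking the accessibility tree has no way
  // to zoom to the drawn object.
  console.log(fallbackCheckbox.getBoundingClientRect());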

USE CASE: Braille devices also use the bounding information to assist with
line breaks on Braille displays.

How do I know these things? I built the offscreen model for the first GUI
screen readers for the PC. I was hip deep in the graphics engine and
windowing systems for both OS/2 and Windows. I also worked on one of the
first screen magnifiers for the PC - Screen Magnifier/2.

So, there are your use cases. There is NO invention here and the text
editor case is really a red herring as it is not the essential reason why
we need the bounds and hit testing.

USE CASE: Hit testing pushes the load off the author and onto the user
agent. Imagine having to do all the GUI hit testing manually for your
Windows app. Also, as things stand now, pointing-device handling occurs at
the canvas element while keyboard handling is done at an element in
fallback content.
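
To illustrate the burden (again my own sketch, not code from any proposal):
without hit testing in the platform, the author has to hit-test pointer
events by hand at the canvas element and then bridge over to the focusable
element in fallback content.

  const canvas = document.querySelector('canvas')!;

  // Each drawn object keeps its own bounds plus the fallback element it
  // stands for.
  interface DrawnObject {
    x: number; y: number; w: number; h: number;
    fallback: HTMLElement;
  }
  const objects: DrawnObject[] = [];

  // Pointing-device handling happens here, at the canvas element...
  canvas.addEventListener('mousedown', (event) => {
    const rect = canvas.getBoundingClientRect();
    const px = event.clientX - rect.left;
    const py = event.clientY - rect.top;
    for (const obj of objects) {
      if (px >= obj.x && px <= obj.x + obj.w &&
          py >= obj.y && py <= obj.y + obj.h) {
        // ...while keyboard handling lives on the element in fallback
        // content, so the author must bridge the two by moving focus
        // manually.
        obj.fallback.focus();
        break;
      }
    }
  });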

Here is the accessibility API for UNIX Systems that needs the bounds (see
BoundingBox) of an object:
http://people.gnome.org/~billh/at-spi-idl/html/classAccessibility_1_1Component.html
Here is the accessibility API (see accLocation) for MSAA, which is used by
both Chrome and Firefox on Windows:
http://msdn.microsoft.com/en-us/library/dd318466.aspx
Here is the accessibility API (see Bounding Box) for a UIA provider:
http://msdn.microsoft.com/en-us/library/ms726714(v=VS.85).aspx

Right now, without a change to canvas we cannot supply this information to
assistive technologies.

Do you support Frank moving forward with the setElementPath/hit test
proposal for the working group to review, and are you still supportive of
having such an API for canvas?

I would like to have this for Mozilla and Microsoft to present at SXSW even
if it is not yet implemented in the browsers by that time. This is basic
accessibility functionality and is needed regardless of any text discussion.

Rich


----- Forwarded by Richard Schwerdtfeger/Austin/IBM on 12/19/2011 02:58 PM
-----

From:	Richard Schwerdtfeger/Austin/IBM@IBMUS
To:	Steve Faulkner <faulkner.steve@gmail.com>,
Cc:	chuck@jumis.com, Cynthia Shelly <cyns@microsoft.com>, david
            bolter <david.bolter@gmail.com>, dbolter@mozilla.com,
            franko@microsoft.com, Jonas Sicking <jonas@sicking.cc>, Maciej
            Stachowiak <mjs@apple.com>, Paul Cotton
            <Paul.Cotton@microsoft.com>, public-canvas-api@w3.org,
            public-html@w3.org, public-html-a11y@w3.org, Sam Ruby
            <rubys@intertwingly.net>
Date:	12/18/2011 10:27 AM
Subject:	Re: Request to re-open issue 131 -USE CASES, USE CASES, USE
            CASES



Folks (Steve, I am back temporarily and would like to elaborate),

HERE ARE THE USE CASES - AGAIN AND IN ONE SPOT

1. Hit Testing and the bounds of an object

USE CASE: Regarding hit testing, it is very, very simple. In ALL operating
systems that support an accessibility API it is ESSENTIAL that a magnifier
be able to determine the location of an accessible object on the screen so
that a user may zoom to it. It has absolutely nothing to do with rich text
editing other than the fact that, like all other objects, we would need to
find the text box in order to zoom to it. You and I, who can see, can scan
a page and find what we want. Yet a magnifier user may only be able to see,
say, a text box which has focus and a few characters, as the screen may be
magnified by a factor of 10. The few characters in the text box may be all
they see on the screen. So, to zoom to something else they will ask their
assistive technology to do things like find an object and zoom to it - or
they may ask it to read from the beginning of the application, starting at
the first accessible object, and maintain a magnification point around the
object.

Unlike HTML, accessible canvas objects reside in fallback content, which is
NOT visible. So, the screen location of these objects can NOT be found
without programmatic intervention. On ALL accessible GUI OS platforms the
bounds of the drawing object are acquired from the device context, which is
ultimately mapped to the drawing object and then to the corresponding
accessible object. The screen location is typically the same location used
in hit testing.

USE CASE: Braille devices also use the bounding information to assist with
line breaks on Braille displays.

How do I know these things? I built the offscreen model for the first GUI
screen readers for the PC. I was hip deep in the graphics engine and
windowing systems for both OS/2 and Windows. I also worked on one of the
first screen magnifiers for the PC - Screen Magnifier/2.

So, there are your use cases. There is NO invention here and the text
editor case is really a red herring as it is not the essential reason why
we need the bounds and hit testing.

USE CASE: Hit testing pushes the load off the author and onto the user
agent. Imagine having to do all the GUI hit testing manually for your
Windows app. Also, as things stand now, pointing-device handling occurs at
the canvas element while keyboard handling is done at an element in
fallback content.

2. Caret and Selection Tracking

USE CASE: If you are a magnifier you must be able to follow the location on
the screen where the user is typing a piece of text or pointing to select
content. Remember, a magnifier's view of the screen may be VERY SMALL. The
magnifier needs to follow along as you type. That is why we submitted the
change request before, why it was approved, and why I had agreement from
David Bolter, Microsoft, Steve Faulkner, etc. to submit the first proposal
that was accepted.
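
To make the author's side of this concrete, here is an illustrative sketch
of mine; the commented-out call is a placeholder for whatever method the
change proposal actually defines, not a confirmed name or signature.

  const canvas = document.querySelector('canvas')!;
  const ctx = canvas.getContext('2d')!;
  // The element in fallback content that holds keyboard focus.
  const fallbackInput = canvas.querySelector('input')!;

  function drawCaret(x: number, y: number, height: number) {
    ctx.fillRect(x, y, 1, height); // the caret the author paints

    // The author knows exactly where the caret is, but today there is no
    // way to hand that rectangle to the user agent so the magnifier can
    // follow the typing. Under the change proposal a call along these
    // lines would go here (hypothetical; see the proposal for the real
    // API):
    // ctx.setCaretSelectionRect(fallbackInput, x, y, 1, height);
  }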

USE CASE: Regardless of whether you are doing rich text or not, canvas
supports the ability to draw text on the screen. If you are creating a
drawing object you will want the user to give it a label. To do that you
have to provide them the ability to enter text. The user experience would
be dreadful if you had to launch an HTML dialog box to enter it, so authors
will want to be able to enter text using canvas for this basic purpose. The
magnifier MUST be able to follow along.

USE CASE: Expanding on the above, people will at times want to select text
on canvas and replace it with new text, even if it means pointing,
highlighting as you drag your finger over the text, and typing over it (we
have no clipboard support in canvas).

3.  USE CASE for text baseline: As Steve indicated, the use cases for text
baseline are in my change proposal. I will not repeat them here.

4. USE CASE for exposing a caret blink rate:

OS platforms allow a user to configure the caret blink rate. Users
configure blink rates to avoid epileptic seizures. The blinking problem is
not limited to text carets. We need to expose this information so that a
canvas author can avoid creating a problem.
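
For illustration (my sketch; "blinkRateMs" simply stands in for however the
user's configured rate would actually be exposed to script): an author who
can read that rate can honor it instead of hard-coding one, including a
setting that disables blinking entirely.

  // Blink the author-drawn caret at the user's configured rate. A
  // non-positive rate is treated as "do not blink at all".
  function startCaretBlink(draw: (visible: boolean) => void,
                           blinkRateMs: number) {
    if (blinkRateMs <= 0) {
      draw(true); // the user has turned blinking off; keep the caret steady
      return;
    }
    let visible = true;
    setInterval(() => {
      visible = !visible;
      draw(visible);
    }, blinkRateMs);
  }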

Does anyone not understand these use cases?

NOTE: Nobody in the canvas accessibility subteam is a proponent of using
canvas for rich text editing (especially me), yet for this basic text
editing support, and because not everyone wants to produce an
internationalized application (no IME), we need these things. Also, just
because we don't want something does not mean authors will not do it. If
we do not provide the basic infrastructure to support things like basic
text input on canvas, you and I will be able to grab and use the
application on the Web, but there will be no recourse for the author to
meet the needs of someone less fortunate. This author may not sell his
application to the government and may in fact provide it for free on the
Web because they think it is cool. So, by not providing the tools to the
author to make it accessible, we who have designed the Canvas 2D API spec.
are partially responsible for excluding this person from access. It is
also unfair to use people with disabilities as a tool to get authors to
code their web pages the way we would like them to. At the end of the day
that is a losing strategy. There are millions of web pages out there with
missing alt text even though the HTML DTD required the attribute.

Rich



From: Steve Faulkner <faulkner.steve@gmail.com>
To: Jonas Sicking <jonas@sicking.cc>,
Cc: Sam Ruby <rubys@intertwingly.net>, david bolter
<david.bolter@gmail.com>, Richard Schwerdtfeger/Austin/IBM@IBMUS,
chuck@jumis.com, Cynthia Shelly <cyns@microsoft.com>, dbolter@mozilla.com,
franko@microsoft.com, Maciej Stachowiak <mjs@apple.com>, Paul Cotton
<Paul.Cotton@microsoft.com>, public-canvas-api@w3.org, public-html@w3.org,
public-html-a11y@w3.org
Date: 12/18/2011 04:16 AM
Subject: Re: Request to re-open issue 131



Hi Jonas,

thanks for your reply.

you wrote:
> For any feature that we introduce to the web platform we need to have
> use cases.

Laura Carlson collected information on the use cases:
http://www.w3.org/WAI/PF/HTML/wiki/Canvas_Accessibility_Use_Cases


you wrote:

>However even for hit testing and focus management, maybe we would
>design the API differently if we weren't trying to use them to built
>text editors. I really don't know enough about accessibility to fully
>answer that.

For hit testing, example uses and background are provided by Frank Olivier
in the doc I pointed to:
http://www.w3.org/wiki/Canvas_hit_testing

He has also publicly stated that building text editors in canvas is a
'fool's errand':
http://lists.w3.org/Archives/Public/public-html/2011Nov/0210

So I think it is reasonable to assume that Frank's hit testing proposal is
not motivated by the desire to build text editors in canvas.

For text baseline the rationale is outlined in the change proposal:
Modify existing Canvas 2D API to expose text baseline and facilitate
drawing of focus rings
http://www.w3.org/html/wg/wiki/ChangeProposals/FocusRingTextBaseline

in essence:
"to facilitate drawing of focus rings around text to support screen
magnifier users"

Is this minor addition to the canvas text drawing methods proposed to
facilitate the building of text editors in canvas? I think not, but
clarification from Rich/Charles would be useful.


For Focus Management the details are in the change proposal:
Modify existing Canvas 2D API caret and focus ring support to drive
screen magnification
http://www.w3.org/html/wg/wiki/ChangeProposals/CaretSelection

Are the methods proposed designed to facilitate the building of text
editors in canvas? I think not, but clarification from Rich/Charles
would be useful.

regards
Stevef

On 16 December 2011 23:45, Jonas Sicking <jonas@sicking.cc> wrote:
> On Fri, Dec 16, 2011 at 3:25 AM, Steve Faulkner
> <faulkner.steve@gmail.com> wrote:
>> Hi Jonas
>>
>> you wrote:
>> "I am personally not at all interested in implementing APIs that are
>> there solely for building text editors in canvas."
>>
>> Of the proposed APIs which do you consider are solely for building text
editors?
>>
>> focus management
>> http://dev.w3.org/html5/canvas-extensions/Overview.html#focus-management-1
>> caret and selection management
>> http://dev.w3.org/html5/canvas-extensions/Overview.html#caret-and-selection-management
>> extensions to text metrics
>> http://dev.w3.org/html5/canvas-extensions/Overview.html#extension-to-the-textmetrics-interface
>> hit testing
>> http://www.w3.org/wiki/Canvas_hit_testing
>
> The question can easily be answered using the age old saying: "what is
> the use case".
>
> For any feature that we introduce to the web platform we need to have
> use cases. If the only use case we can come up with are ones to
> implement text editors, then it would seem like an API "solely for
> building text editors".
>
> Based on that, it would seem like at least hit testing and focus
> management has other use cases. The rest of the APIs I don't know well
> enough to answer.
>
> However even for hit testing and focus management, maybe we would
> design the API differently if we weren't trying to use them to built
> text editors. I really don't know enough about accessibility to fully
> answer that.
>
> / Jonas



--
with regards

Steve Faulkner
Technical Director - TPG

www.paciellogroup.com | www.HTML5accessibility.com |
www.twitter.com/stevefaulkner
HTML5: Techniques for providing useful text alternatives -
dev.w3.org/html5/alt-techniques/
Web Accessibility Toolbar -
www.paciellogroup.com/resources/wat-ie-about.html

Received on Monday, 19 December 2011 21:20:35 UTC