Re: FORMAL OBJECTION (was RE: Working Group Decision on ISSUE-204 aria-hidden)

On Tue, Aug 14, 2012 at 5:57 PM, Maciej Stachowiak <mjs@apple.com> wrote:
> I believe my remark below is equally applicable to your comment. To my understanding as an implementor, hidden content can be exposed to assistive technologies (including screen readers such as VoiceOver) without exposing it to the TAB cycle for keyboard users. Thus, non-screen-reader users making use of the keyboard will not experience a problem, as they will not tab into invisible content. My understanding of the specific concern raised by John, both in his survey comment and in his formal objection below, is that exposing content with full semantics to assistive technologies will inevitably put it in the tab cycle, thus leading to the harm both of you are worried about, namely sighted keyboard users tabbing into invisible content. The point of information I wanted to provide is that it does not necessarily have this effect, for either screen reader users, or users viewing the screen and navigating with the keyboard.

I think the big disconnect in this conversation may be that assistive
technology often uses accessibility APIs to provide a user experience
that extends and depends upon, rather than simply replacing, the base
user experience provided by the browser. The concern here, I think, is
that if the long description remains visually hidden there is no user
experience to extend.

Consider a person with partial sight using VoiceOver. The object under
the VoiceOver cursor is indicated visually (as well as being spoken or
brailled) with a black rectangle drawn around it. If the
VoiceOver cursor descended into a subtree in the accessibility
hierarchy that has no equivalent in the visual layout, what would the
black rectangle be drawn around?

A naive implementation would be for VoiceOver to draw it around the
object described. This has the disadvantage that the user cannot
visually distinguish between focus on different parts of the complex
long description. For example, if the long description contained
multiple controls, you would not be able to tell (visually) which
control would be activated. This would be a major departure from the
general user experience of using mixed aural/braille and video output.
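
To make the naive behaviour concrete, here is a rough TypeScript/DOM
sketch (my own illustration, not how VoiceOver is actually
implemented): a node inside a display:none long description has no
geometry of its own, so the focus rectangle falls back to the
described object's box for every node, and the indicator never moves.

    // Illustrative only -- names and structure are invented for this sketch.
    // A node in the accessibility hierarchy: either backed by a rendered DOM
    // element, or part of a hidden long description with no layout at all.
    interface AccessibilityNode {
      element: Element | null;   // null when nothing is rendered for it
      describedObject: Element;  // the element carrying aria-describedby
    }

    // "Naive" focus rectangle computation.
    function focusRect(node: AccessibilityNode): DOMRect {
      const el = node.element;
      if (el && el.getClientRects().length > 0) {
        // The node has real geometry of its own.
        return el.getBoundingClientRect();
      }
      // No layout for this node (e.g. inside a display:none description):
      // fall back to the described object's box. Every node inside the
      // hidden description therefore gets the *same* rectangle, so the
      // user cannot see which part actually has the VoiceOver cursor.
      return node.describedObject.getBoundingClientRect();
    }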

A more sophisticated implementation would be for VoiceOver to draw an
overlay, perhaps similar to the existing Item Chooser, that exposes
the structure, content, and functionality of the long description
visually. I'm guessing this is what Maciej is envisaging.
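
Something along these lines, again sketched in TypeScript against the
DOM (all names and details are my own guesses, not anything Maciej
has proposed): the AT, or the browser on its behalf, clones the
hidden description into a visible panel so there is real geometry to
draw a focus indicator around.

    // Hypothetical "overlay" approach -- all names here are invented.
    function showLongDescriptionOverlay(describedObject: Element): HTMLElement {
      const descIds = describedObject.getAttribute('aria-describedby') ?? '';
      const overlay = document.createElement('div');
      overlay.setAttribute('role', 'dialog');
      overlay.style.cssText =
        'position:fixed; inset:10% 20%; overflow:auto; background:#fff; ' +
        'border:2px solid #000; z-index:2147483647;';

      for (const id of descIds.split(/\s+/).filter(Boolean)) {
        const source = document.getElementById(id);
        if (source) {
          const clone = source.cloneNode(true) as HTMLElement;
          // The original may be hidden inline (e.g. display:none); force the
          // clone visible so it has real geometry to draw focus around.
          clone.style.display = 'block';
          overlay.appendChild(clone);
        }
      }
      document.body.appendChild(overlay);
      return overlay;
    }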

Different combinations of user agents and assistive technology could
divide responsibility for maintaining visible focus in different
ways. For example, Internet Explorer might provide an accessibility API
method on the described object to navigate to a hidden long
description. When called by an AT such as JAWS, Internet Explorer
would render an overlay exposing the structure, content, and
functionality of the long description. JAWS would then draw focus
indicators into the overlay rendered by Internet Explorer.
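
To be clear, no such API exists today in Internet Explorer, JAWS, or
anywhere else; the TypeScript interfaces below are purely a
hypothetical sketch of how that division of responsibility might
look: the browser owns rendering the overlay, and the AT only asks
for it and paints its focus indicator at coordinates the browser
reports.

    // Purely hypothetical interfaces -- nothing like this is implemented.
    interface HiddenDescriptionSupport {
      // Implemented by the user agent on the accessible object that carries
      // aria-describedby. Renders the hidden description and returns a handle.
      openLongDescription(): LongDescriptionOverlay;
    }

    interface LongDescriptionOverlay {
      // Bounding box (screen coordinates) of a node inside the overlay, so
      // the AT can draw its own focus rectangle over the browser's rendering.
      boundsOf(nodeId: number): { x: number; y: number; width: number; height: number };
      close(): void;
    }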

The WG needs to recognise that a lot of different software would need
to be updated for this to work and that many critical vendors aren't
really represented here. For example, NVDA tends to adopt a minimal
approach to extending the base experience. Rendering an overlay itself
would be quite a departure. So I think we're looking at a major
implementation commitment to make it work.

(I think there are other big difficulties here, like the fact that
@aria-describedby isn't suited for pointing to long descriptions at
all since it is used to populate a plain text "accessibility
description" field in accessibility APIs.)

--
Benjamin Hawkes-Lewis

Received on Tuesday, 14 August 2012 19:12:21 UTC