
Re: Implementation Details request on Issue 204 Decision (was RE: FORMAL OBJECTION (was RE: Working Group Decision on ISSUE-204 aria-hidden))

From: Maciej Stachowiak <mjs@apple.com>
Date: Tue, 21 Aug 2012 01:52:49 -0700
Cc: public-html@w3.org, 'HTML Accessibility Task Force' <public-html-a11y@w3.org>
Message-id: <0DFD71B5-AF3A-46C9-BB53-2F9FB5016485@apple.com>
To: John Foliot <john@foliot.ca>

I'll do my best to answer your questions; you are correct that they are indeed new questions.

On Aug 20, 2012, at 11:34 PM, John Foliot <john@foliot.ca> wrote:

> 
> In your response above, you state both that aria-describedby and other aria
> properties and values are only exposed to output-side Assistive
> Technologies, but then go on to state that when VoiceOver is activated, the
> screen will display whatever VoiceOver speaks aloud by default. If I am to
> understand this correctly then, toggling VoiceOver "on" (enabled) will
> invoke a visual change to the page(s) viewed (at least in Safari) if/when
> required; ie: an aria-label value (<div aria-label="wonder widget"> for
> example) will be displayed on screen when using VoiceOver, even if it is not
> normally visible to users NOT using VoiceOver.

This is roughly accurate. However, to be more specific, there is normally no change to the display of the page itself; what VoiceOver speaks is displayed as a distinct overlay, which shows both content and structural messages. For example, the markup you describe would say "wonder widget container" and would show that in the overlay area.

> 
>> If a blind user has VoiceOver enabled,
>> a sighted user can stand near them and see everything the VoiceOver
>> user hears.  
> 
> If this is indeed the case, then are we also to assume that, using the new
> technique delivered via the current Issue 204 decision,
> Safari+VoiceOver will also toggle the @hidden state from "true" to "false"
> (setting aside for the immediate moment that this is not how the Boolean
> states of @hidden are expressed in HTML5)? I ask this, as how else would the
> semantic structures of the aria-describedby-referenced but @hidden content be
> visually rendered so that "VoiceOver displays everything it speaks on the
> screen by default"?

We may choose to display a rendered version of the content specifically when the user actuates a description (which does not happen in normal VoiceOver navigation; it has to be specially requested). However, even if we did not, VoiceOver is able to read through a set of structured content and then let you navigate it. There is a visual display even if nothing special shows up on the page itself.

> 
>> This is true
>> today, and likely would remain true if we exposed full semantics, not
>> just flattened output.
> 
> Assuming that this is indeed how things would work for Safari+VoiceOver
> (where Apple has the luxury of the tighter binding of the two "native"
> applications on your devices), has there been any discussion or thought on
> how this might also work with other tools, including other browsers and
> other, 3rd Party screen reading tools such as JAWS or NVDA (or Orca,
> Hal/SuperNova, ZoomText, etc.)?  

So far as I know, none of these products run on Mac OS X or iOS, so we don't really design for them. That being said, anyone is free to make a third party product that works with the accessibility APIs on Mac OS X. Everything that VoiceOver uses is available to third-party tools.

We do make sure that WebKit in iTunes for Windows can successfully present the Web content in iTunes via JAWS and Window-Eyes, but that is canned content, so we only test the restricted subset that the iTunes content actually uses.

> 
> Currently, I am unaware of any 3rd party AT tool that 'advertises' its
> presence to web browsers. In fact, I know that at least regarding JAWS,
> Freedom Scientific is fairly adamant that their tool not be "discoverable"
> in such a fashion, citing user privacy concerns: at issue is that some users
> of these tools may not want to announce publicly that they are visually
> impaired (or even simply that they are using a screen reader). As such, this
> has been seen as a problem for many years by some web developers, who have
> wanted to have this ability, so that they could then 'craft' an alternative
> experience for the non-sighted user. I realize that this is likely outside
> of the scope of what you can say (as it involves reactions and comments from
> outside of Apple), but do you know if Jonas or Matt or any of the other
> supporters of the current decision have any feedback here? Are you or anyone
> else aware of any discussions with these 3rd party vendors to see how they
> will work in cooperation with browsers to afford this toggling capacity?

I don't know anything about what other browser vendors or AT vendors think on this.

In the case of Safari, we don't know which particular AT is running, but we do know when support for assistive technologies is enabled, and we know when the user requests a description. To be clear, we do not special-case VoiceOver; we just use the OS X accessibility APIs as designed.

> (I ask this in light of the "Exit CR" discussions, as without at least one
> other full public implementation of this technique, I suspect that the Issue
> 204 decision would become a Feature at Risk moving forward).

I think the "encourage" statement, because it is completely optional and has no conformance criteria, would likely not count as a feature. But if the WG does decide to treat it as a feature at risk, then you are correct about what might happen.

> 
> Returning to how you envision this might work in Safari+VoiceOver, I am also
> curious about a few other things. 
> 
> For one, how *will* the toggling/over-riding of @hidden be handled? Will
> VoiceOver invoke a re-writing of the DOM tree to remove any and all
> instances of @hidden, or do you have something else in mind? Since the
> previously hidden but now exposed semantically rich content will be rendered
> on screen, will this content also support CSS style properties? (I would think
> they would, but am unsure). What will happen to the page layout when a large
> block of new, semantically rich content is rendered on screen? How/where
> will it render? I would presume that the content would render in the logical
> relative position afforded by the DOM order (but I also suppose that the
> block, which when visible and supporting CSS, could be placed anywhere using
> CSS - *IF* CSS is supported), but a reading of the currently accepted Change
> proposal seems to be silent on specifics of how this will work.

I think you are making inaccurate assumptions about how it would likely work. There would be visual display only when an aria-describedby description is explicitly requested. When that happens, there is currently visual display of the flattened spoken text, via the usual VoiceOver mechanism of a distinctive visual overlay. If we exposed content pointed to by aria-describedby with full semantics, we might make a visual overlay that shows the rendered content specifically for this purpose. That would depend on whether it seems beneficial on the whole.
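For readers following the thread, the Issue 204 pattern being discussed can be sketched in plain markup. This is an illustrative example only; the ids, image, and description text are hypothetical and not taken from the decision text:

```html
<!-- A long description that is removed from the default visual
     rendering via @hidden, but still referenced by aria-describedby,
     so assistive technologies can expose it on explicit request. -->
<img src="sales-chart.png"
     alt="2012 quarterly sales chart"
     aria-describedby="sales-desc">

<div id="sales-desc" hidden>
  <h2>Sales by quarter</h2>
  <p>Revenue rose 4% in Q1, fell 2% in Q2, and was flat for the
     remainder of the year.</p>
</div>
```

In the behavior described above, the `<div>` never becomes visible in the page itself; the flattened or (potentially) rendered description would appear only in VoiceOver's own overlay when the user explicitly requests it.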

> 
> To aid in the testing and explanation of all of these questions, I have
> taken the liberty of posting a test-suite page at
> http://john.foliot.ca/html5/w3c/longer_descriptions/ 
> 
> In the first example of aria-describedby + @hidden and inline longer textual
> descriptions
> (http://john.foliot.ca/html5/w3c/longer_descriptions/index.html#option1),
> will invoking VoiceOver push the sentence preceding the info-graphic 'down'
> and then render the semantic content below the image (something like how
> <details> is envisioned to work?) Or do you envision something different?

If you go here <http://www.apple.com/accessibility/voiceover/> and watch the videos (particularly "Visuals of Collaboration"), you will see what VoiceOver's default visual display looks like, including various customization options. I would expect that, at minimum, the standard visual display would continue to work. Note that the structure of structured content is already reflected in the VoiceOver display panel, as words. Going beyond this, we could consider a floating overlay on top of the page, showing the aria-describedby content rendered, when the user activates the long description. Whether we did something like that would depend on input from the company's accessibility experts. I do not expect that we would alter the rendering of the page itself as a result of accessing a long description or simply having a screen reader enabled.

> Maciej, I thank you in advance for any further feedback you can provide. I
> also welcome others to answer the current questions I pose, as I appreciate
> that some of the questions are certainly outside of the scope of Apple's
> product offerings.

Hope this helps.

Regards,
Maciej
Received on Tuesday, 21 August 2012 08:52:50 GMT
