
Re: Using @aria-describedby for long described image links [Was: Using an image map for long described image links [Was: Revert Request]]

From: Steve Faulkner <faulkner.steve@gmail.com>
Date: Thu, 2 Feb 2012 10:39:07 +0000
Message-ID: <CA+ri+VmnkHDriX7AOMcqp70qaeTd4kZoUekzCex4h-W+GBbAkg@mail.gmail.com>
To: Benjamin Hawkes-Lewis <bhawkeslewis@googlemail.com>
Cc: John Foliot <john@foliot.ca>, Matthew Turvey <mcturvey@gmail.com>, Leif Halvard Silli <xn--mlform-iua@xn--mlform-iua.no>, Silvia Pfeiffer <silviapfeiffer1@gmail.com>, Laura Carlson <laura.lee.carlson@gmail.com>, Sam Ruby <rubys@intertwingly.net>, Paul Cotton <Paul.Cotton@microsoft.com>, Maciej Stachowiak <mjs@apple.com>, HTML WG <public-html@w3.org>
Hi Ben,
> Apple Accessibility API), AT could query the DOM to find the referenced
> description elements and map them back to the accessibility tree.

My understanding is that VoiceOver does not query the DOM directly at
all, by design.


On 2 February 2012 10:33, Benjamin Hawkes-Lewis <bhawkeslewis@googlemail.com> wrote:

> [This conversation has derailed from discussing image maps]
> On Thu, Feb 2, 2012 at 4:44 AM, John Foliot <john@foliot.ca> wrote:
> > 2) You couldn't just park it off screen somewhere, or 'hide' it with
> > @hidden, and then link to it with aria-describedby because:
> >
> >        a) You would lose both the hyperlink to the actual speech, as well
> > as the semantic markup of <abbr>, as both would be flattened to string
> > text.
> >
> >        b) If somehow you could overcome the flattening-to-string-text
> > problem, to activate the hyperlink you must put tab-focus on the link -
> > how do you focus on something that is hidden? And what of sighted users
> > (perhaps using a tool such as ZoomText Magnifier/Reader, which is both a
> > screen reader and screen magnifier -
> > http://www.aisquared.com/zoomtext/more/zoomtext_magnifier_reader/) who
> > would hit the tab key and not see a visible tab focus (failing WCAG 2.4.7
> > Focus Visible: Any keyboard operable user interface has a mode of
> > operation where the keyboard focus indicator is visible. (Level AA)) -
> > what exactly should the magnifier magnify?
> It's a good question, but again we need to ask the editors of ARIA
> this question, as they are the ones saying UAs may and AT should
> provide a way to navigate to even hidden structured information:
> http://www.w3.org/WAI/PF/aria-implementation/#mapping_additional_relations_reverse_relations
>    http://www.w3.org/WAI/PF/aria-implementation/#include_elements
> One possible UI solution to this would be to generate some sort of
> popup display rendering the hidden content. You could even mimic
> precisely how JAWS uses @longdesc by rendering the hidden content into
> a data URI and then asking the browser to open the data URI in a new
> window. But it might give a better UI to generate a lighter-weight
> popup (think of the popup displays used by Apple VoiceOver) based on
> the accessibility tree itself.
> >        c) The aria-describedby attribute (as well as its companion
> > aria-labelledby and aria-label attributes) is read as the Accessible
> > Label in the Accessibility APIs - those APIs do not recognize the fact
> > that this is additional and supplemental information that should be
> > served to the user on demand - they instead supply their
> > labeling/describing text as part of the regular speech flow.
> This doesn't sound right to me. In the suggested API mappings,
> @aria-labelledby and @aria-label are mapped to accessible name and
> (where available) labelled by relations but @aria-describedby is
> mapped to accessible description and (where available) described by
> relations:
>    http://www.w3.org/WAI/PF/aria-implementation/#mapping_role_table
> APIs do not generate "speech flow". AT is free to use the semantics in
> the accessibility graph however it likes, and additionally to query
> the DOM for more information. It may be (I haven't seen any tests
> around this) that all text-to-speech ATs currently read the
> description property automatically, but that is not a limitation of
> the information available to AT. For example, where APIs support
> relations (IAccessible2, UI Automation, AT-SPI), AT can locate the
> description nodes in the accessibility tree. Where APIs do not
> support relations (legacy MSAA and the Apple Accessibility API), AT
> could query the DOM to find the referenced description elements and
> map them back to the accessibility tree. Having located the
> description nodes, rather than automatically reading the description
> property, AT could give the user the option of moving focus to those
> nodes.
> Querying the DOM is of course more cumbersome than using relations,
> but I believe this is how JAWS supports @longdesc in IE.
> Gecko appears to provide special methods for opening long descriptions
> in nsHTMLImageAccessible:
> http://mxr.mozilla.org/mozilla-central/source/accessible/src/html/nsHTMLImageAccessible.h
> http://mxr.mozilla.org/mozilla-central/source/accessible/src/html/nsHTMLImageAccessible.cpp
> I'm not quite sure what these actions look like in the various
> accessibility APIs, but there's no fundamental reason that opening a
> data URI or some other sort of popup pulling together the elements
> referenced by @aria-describedby could not be exposed by the same
> mechanism. It would look the same to legacy AT.
> --
> Benjamin Hawkes-Lewis
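
For what it's worth, the data URI technique you describe (mimicking how
JAWS opens @longdesc targets) could be sketched roughly like this - a
minimal illustration only, not any shipping implementation; the function
name is made up:

```typescript
// Hypothetical sketch: serialize hidden description markup into a
// data: URI that a user agent could open in a new window, mimicking
// the JAWS @longdesc behaviour discussed above.
function descriptionToDataUri(descriptionHtml: string): string {
  const page =
    `<!DOCTYPE html><html lang="en"><head><meta charset="utf-8">` +
    `<title>Long description</title></head>` +
    `<body>${descriptionHtml}</body></html>`;
  // encodeURIComponent keeps the URI valid for markup containing
  // spaces, quotes, '#' and non-ASCII characters.
  return "data:text/html;charset=utf-8," + encodeURIComponent(page);
}
```

A UA could then do something like
`window.open(descriptionToDataUri(descElement.innerHTML))`, though as you
say a lighter-weight popup built from the accessibility tree may be
better UI.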
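
And the DOM query you suggest for APIs without reverse relations is
essentially just resolving the whitespace-separated IDREF list in
@aria-describedby. A rough sketch, with a `lookup` callback standing in
for document.getElementById (names are illustrative):

```typescript
// Hypothetical sketch of the query an AT (or accessibility bridge)
// might perform under an API lacking reverse relations: resolve the
// whitespace-separated IDREF list in @aria-describedby to whatever
// the lookup function finds, skipping dangling references.
function resolveDescribedBy<T>(
  idrefList: string,
  lookup: (id: string) => T | undefined
): T[] {
  return idrefList
    .trim()
    .split(/\s+/)                    // IDREF lists are space-separated
    .filter((id) => id.length > 0)   // guard against an empty attribute
    .map(lookup)
    .filter((el): el is T => el !== undefined);
}
```

Having resolved the targets, the AT could map them back to accessibility
tree nodes and offer to move focus there rather than flattening them
into the description string.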

with regards

Steve Faulkner
Technical Director - TPG

www.paciellogroup.com | www.HTML5accessibility.com |
HTML5: Techniques for providing useful text alternatives -
Web Accessibility Toolbar - www.paciellogroup.com/resources/wat-ie-about.html
Received on Thursday, 2 February 2012 10:40:05 UTC
