RE: aria-describedat

Hi folks,

1. I would love to see an attribute used by the general population that
would also benefit persons with disabilities. We at Learning Ally
(formerly RFB) did a description of a coral reef with an octopus. The
students in the 2nd grade class said, "Oh, is that what is in that
picture." In other words, they did not see the camouflaged octopus in the
picture, and once it was brought to their attention, the meaning became
clear. This was in a 2nd grade textbook.
I talked to college students who said, "I wish I had an expert in physics
describing the essential points of the image I am looking at, because it is
not as obvious as the textbook author may think."

2. If it were a benefit to the mainstream, it might get used more; i.e.,
it is an enhancement that simply makes for better content. Untrained eyes
need a little extra help in understanding the meaning and purpose of the
image in a particular context.

Best
George



-----Original Message-----
From: Geoff Freed [mailto:geoff_freed@wgbh.org] 
Sent: Thursday, March 29, 2012 5:40 PM
To: John Foliot; Silvia Pfeiffer
Cc: david bolter; Leif Halvard Silli; Charles McCathieNevile; Benjamin
Hawkes-Lewis; Richard Schwerdtfeger; faulkner.steve@gmail.com;
jbrewer@w3.org; George Kerscher; laura.lee.carlson@gmail.com; mike@w3.org;
public-html-a11y@w3.org; w3c-wai-pf@w3.org; W3C WAI-XTECH
Subject: RE: aria-describedat


________________________________________
From: John Foliot [john@foliot.ca]
Sent: Thursday, March 29, 2012 5:36 PM
To: Silvia Pfeiffer
Cc: david bolter; Leif Halvard Silli; Charles McCathieNevile; Benjamin
Hawkes-Lewis; Richard Schwerdtfeger; faulkner.steve@gmail.com;
jbrewer@w3.org; George Kerscher; laura.lee.carlson@gmail.com; mike@w3.org;
public-html-a11y@w3.org; w3c-wai-pf@w3.org; W3C WAI-XTECH
Subject: Re: aria-describedat

Quoting Silvia Pfeiffer <silviapfeiffer1@gmail.com>:

>> Not MetaData, real, human-readable textual data that describes in more
>> detail what the *foo* is that it is attached to.
>
> Metadata = data about data.
> long description = a long description (i.e. data) about the element
> (i.e. data)

Hair-splitter <grin>. For many garden-variety web authors, metadata
has a "special" connotation of <meta name="keywords" content="try,
fool, search, engines, stuffing, hokum">, so I would ask that we avoid
adding any additional confusion and simply not refer to longer textual
descriptions as metadata.
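
To make the distinction concrete, here is a minimal sketch (the file
names and description text are hypothetical): keyword metadata is
machine-facing data about the page, while @longdesc points to a separate,
human-readable description of the image; the proposed aria-describedat
would reference a description in much the same way.

  <!-- machine-facing metadata: data about the page, not prose for readers -->
  <meta name="keywords" content="coral, reef, octopus, camouflage">

  <!-- human-readable long description: a page of prose a person can read -->
  <img src="reef.jpg" alt="Photo of a coral reef"
       longdesc="reef-description.html">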



>> Minor correction here: JAWS *has* introduced a new interaction for
>> @longdesc. When JAWS encounters the @longdesc attribute in an <img>, it
>> announces the @alt text and then states: "Press ALT plus Enter for Long
>> Description" - and then pauses waiting for the user to tab (continue)
>> or hit enter (explore).
>
> You mean: hit alt-enter?

Alt+Enter (simultaneously).

GF:
What's really interesting to me is that in Firefox, you only have to press
Enter even though JAWS announces "press Alt+Enter".


> In any case: this is an interaction that the screenreader creates and
> not one that the browser creates.

Exactly, which is the key difference between JAWS and NVDA w.r.t.
@longdesc: NVDA does not want to be in the position of defining a user
interaction, but rather to map to a pre-defined interaction 'native'
to the browser.


> That's the key difference.
> I guess, we could ask if browsers would agree to using alt-enter as
> the recommended interaction for the new attribute.

I think that *might* be one of a few possible strategies, but I would
caution against recommending a single solution, and would rather allow
user agents to develop appropriate contextual strategies.

GF:
I'd go so far as to say it might be a waste of time to try to convince
user agents (that is, screen readers and/or browsers) to unify here.
Screen readers already use wildly different key combinations (consider
the key-mapping differences between VoiceOver and any other screen
reader, for example). I seriously doubt that alignment is in the future,
nor would I push for it. Getting support and operability for long
descriptions from both kinds of user agents is more the goal, I think.



On the
desktop, I think the contextual menu is a working and workable
solution for many sighted users (mouse right-click, or for keyboard
users Shift+F10 >> Tab to the "longdescy thing" >> Enter), and a
screen reader could map that interaction pattern to a custom keyboard
control (such as Alt+Enter). I think the mobile experience might, by
necessity, be very different, however, as the traditional
mouse/keyboard affordances simply are not there.

It would nonetheless be extremely useful if all browsers followed a
general interaction pattern, for interoperability benefits. I note as
well that this does not address the discoverability issue, only the
interaction issue.

JF

Received on Saturday, 31 March 2012 21:31:21 UTC