RE: ISSUE 30 @longdesc use cases

Jonas Sicking wrote:
> 
> I've never used AT tools so I can't answer more specifically than "The
> same way that the screen reader would jump to a @longdesc page, or
> jump to the part of the page pointed to by @aria-describedby".

In screen readers that support @longdesc, the fact that a long description
is provided is announced to the user, but to access that description the
user must activate the link (by pressing Enter) - it is a user-choice
switch.

Using the sample code you sent me (and testing only in NVDA, as I am on
vacation and not in the office), the image is announced as "long
description here" (the text associated with the image via
aria-describedby). There is no toggling mechanism: the user *must* listen
to the full text referenced by the 'describedby' attribute (I cannot speak
for other screen readers at this time, but I believe the result is the
same). Thus if you placed an 80-word paragraph directly below an image and
used the aria-describedby mechanism to point to that paragraph, the screen
reader would read that paragraph as a directly associated part of the
image, then move on to the paragraph itself (which, remember, sits
directly *after* the image) and read it out loud again. Yes, that's right,
*read it a second time*!
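
To make that concrete, a minimal sketch of the markup pattern I am
describing (file name and id are purely illustrative) would be:

<img src="chart.png" alt="Chart of user statistics"
     aria-describedby="chart-caption">
<!-- NVDA reads this paragraph once as the image's description,
     then again when it reaches the paragraph in the normal flow -->
<p id="chart-caption">
  An 80-word explanation of the chart, visible to everyone, sitting
  immediately after the image in the document.
</p>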

The 'hack' that authors use, then, is to point the aria-describedby
attribute at a code block similar to: <a href="[path to a page somewhere
else]" style="margin-left:-999px; position: absolute;">Read the
transcript</a>, so that when the screen reader arrives at the image it
announces "Link: Read the Transcript" (in effect replicating EXACTLY what
@longdesc does), at which point the screen reader user can activate the
link, or continue on to read the next block in the page. Choice is good!
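
Spelled out in full, that hack looks something like the following (the
file names and id are illustrative; the off-screen CSS is the variant
quoted above):

<img src="video-still.png" alt="Still frame from the keynote video"
     aria-describedby="transcript-link">
<!-- positioned off screen so sighted users never see it,
     but still exposed to the screen reader as a link -->
<a id="transcript-link" href="/transcripts/keynote.html"
   style="margin-left:-999px; position: absolute;">
  Read the transcript
</a>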

> 
> This mechanism doesn't need to be specific to AT users for what it's
> worth. A browser for seeing users can expose this description in a
> similar way that it exposes the link to @longdesc pages.

In screen readers that support @longdesc, on arriving at an image the
screen reader will announce something along the lines of: "image {pause}
chart of user statistics {pause} long description" - the last part being
announced as a link, at which point the user can follow that link to the
longer description, or not.
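
For comparison, the @longdesc markup being announced there is simply
(file names are illustrative):

<img src="chart.png" alt="Chart of user statistics"
     longdesc="chart-description.html">
<!-- the long description lives on its own page; the screen reader
     offers it as a link the user may follow or skip -->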

I am still unclear on the discovery mechanism(*) here for sighted users.
If the content is 'hidden' from the normal flow (but still in the DOM),
how does the sighted user a) know to go looking for it, and where to look?
b) access the content (i.e. change the authored state from hidden to not
hidden)? Where does the hitherto hidden content render on screen by
default? I can understand that the state might be changed via scripting
(see the sketch after the footnote below), but is there a native mechanism
to otherwise discover the existence of content covered by @hidden, and is
there a native means for end users to change that state?

(* discovery of 'hidden' stuff in web pages has been a long-documented
accessibility issue - it is the same problem we have with @accesskey:
how do we know when an author has created these items? With @longdesc
today, in Opera and to a lesser extent in Firefox we can intuit that
right-clicking on an image exposes a contextual menu, and this is where
the @longdesc link is offered - in Opera as an actual link, and in Firefox
as the URL of the long description, but one must 'note' that URL and
follow it manually; direct linking doesn't exist. Once again, the
mechanisms themselves aren't flawed; it's the lack of support in the
browsers.)
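
To be clear, by "changed via scripting" I mean an author-supplied toggle
along these lines (the button, ids and text are purely illustrative -
there is nothing native here for the end user to discover):

<img src="chart.png" alt="Chart of user statistics"
     aria-describedby="chart-description">
<button type="button"
        onclick="var d = document.getElementById('chart-description');
                 d.hidden = !d.hidden;">
  Show or hide the long description
</button>
<p hidden id="chart-description">
  The long textual description of the chart.
</p>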

SO: 

<img src="chart.png" alt="Short description of the image"
aria-describedby="chart-description">
<p hidden id="chart-description">
  A very long tract of text that goes into excruciating but important
detail about the sophisticated image placed on the page that is hidden for
sighted users using the @hidden mechanism but linked using
aria-describedby will be force-read aloud to the non-sighted user in total
whether or not they want to actually hear every nuance of the detailed
image in the page because there is no other native toggling mechanism
available today (except @longdesc).
</p>

...doesn't really solve the problem, and in fact creates new ones -
information via fire-hose for the non-sighted user, and no native exposure
mechanism for sighted users who might benefit from the same long text
description of the image.

> 
> However I'm not sure that the @longdesc description is very fruitful.
> I saw a trivial answer to one of the main concerns raised in the
> beginning of this thread.

OK, however it was you who introduced the topic...


> I don't expect that it will change the fact
> that a Formal Objection will be pursued or that the chair decision
> will be appealed. So I'll just let those processes carry on.

Fair enough, but as Maciej has noted to me off-list, new data points are
new data points, and they should be brought to the list as they emerge. I
think we have successfully concluded that using @hidden is not a mechanism
that addresses the issue @longdesc addresses - in fact it cannot.

> 
> The discussion about @hidden is a separate one though, but I'll pursue
> that in a bug.
> 
> / Jonas

Received on Tuesday, 24 August 2010 02:01:18 UTC