RE: Example canvas element use - accessibility concerns

From: John Foliot - WATS.ca <foliot@wats.ca>
Date: Fri, 20 Feb 2009 17:38:04 -0800
To: "'Ian Hickson'" <ian@hixie.ch>
Cc: "'HTML WG'" <public-html@w3.org>, "'W3C WAI-XTECH'" <wai-xtech@w3.org>
Message-ID: <06e201c993c5$0a8f3e90$1fadbbb0$@ca>
Ian Hickson wrote:
> >
> > *Real* user research suggests exactly the opposite:
> >
> > 	"In general, those with disabilities, those that use a screen
> > reader more, and those with higher screen reader proficiency all
> > tended to prefer the more brief alternative texts more than those
> > with no disabilities, less frequent use, and lower proficiency."
> The bit you're quoting here is about images that show logos, not about
> images showing detailed diagrams. The spec does in fact support what
> you say for logos:
> There doesn't seem to be anything in the survey regarding how to handle
> images that convey complicated concepts in the form of diagrams not
> otherwise represented in the text.

Ian, yes, you are correct; my response was not 100% on target.  The initial
results from WebAIM's survey are terse... it was something at hand to be
used quickly.  My personal anecdotal evidence and experience in the field
suggest that this is true for *all* alt text: the majority of daily screen
reader users want concise, summary-type data for the @alt value, with an
expanded explanation made available if/when appropriate.  (You have shown
that such an explanation is rarely provided, and/or not done correctly,
with @longdesc - this, BTW, does not negate the need for this type of
functionality; in fact it reinforces it.  But whether @longdesc,
aria-describedby, or both is the best mechanism, we cannot yet be sure:
@longdesc has native support in current user agents, whilst ARIA support
is still rolling out.)

You were right, in your example, to illustrate the kind of good, useful
textual alternative that should exist for an image, especially a complex
one; your error was in the method of delivery to the end user... not
every screen reader user wants to hear that much detail on first
pass.  Imagine '...a glance at the image' vs. '...a close study of the
image' and you get the idea.  We need to actually provide both.
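To sketch what "both" might look like in markup - a terse @alt for the
glance, plus a longer description wired up via @longdesc and
aria-describedby - here is a minimal, hypothetical example (the file
names, id, chart data, and alt wording are all illustrative, not taken
from this thread or from the spec's examples):

```html
<!-- Concise @alt serves the "glance"; the linked/referenced prose
     serves the "close study". All names and values are made up. -->
<img src="sales-q4.png"
     alt="Bar chart: Q4 sales by region"
     longdesc="sales-q4-desc.html"
     aria-describedby="sales-q4-desc">

<!-- In-page long description, referenced by aria-describedby above.
     A copy of this prose would also live at sales-q4-desc.html for
     user agents that only support @longdesc. -->
<p id="sales-q4-desc">
  Q4 sales rose in every region: North 1.2M, South 0.9M, East 1.5M,
  West 1.1M. East grew fastest, up 30% over Q3.
</p>
```

The point is that the user, not the author, chooses how much detail to
consume: the @alt is announced on first pass, and the fuller description
is there on request.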

The point here, however, is that in an effort to provide good guidance on
ensuring accessibility, the initial recommendation was not actually based
upon factual research, but rather on a skewed belief that this is what users
need/want.  Many of the other contentious accessibility friction points can
trace their roots back to this fundamental starting point.

> This is awesome data, thanks for the link. I'll make sure to study this
> carefully and see if anything in the spec should be updated based on
> it.

I have BCC'd Jared Smith on this note in case he does not see this thread,
as apparently he has more raw data that requires further analysis; but as
for all of us, bandwidth is always finite.  However, perhaps some offline
discussion might prove fruitful.  Jared?

Received on Saturday, 21 February 2009 01:38:48 UTC
