
Re: Checkpoint 3.4 again

From: Charles McCathieNevile <charles@w3.org>
Date: Sun, 29 Jul 2001 00:06:55 -0400 (EDT)
To: Marti McCuller <marti@agassa.com>
cc: WAI GL <w3c-wai-gl@w3.org>
Message-ID: <Pine.LNX.4.30.0107282357470.5545-100000@tux.w3.org>

Sure, we expect people who rely on images to have their own applications to
help: for example, image-rendering software and hardware (the logical
equivalent of text-rendering software and hardware). If there were techniques
by which we could assume that some automatic translation was available, we
might be able to expect people to use those, just as we would if we could
provide image-interpretation software, and as we might do to interpret sound
if the software and hardware that processed it were more readily available and
worked better.

Unfortunately we cannot automatically interpret most multimedia or most text
without some work from a human, so for now we require various kinds of
alternatives to be supplied. However, it doesn't seem that we are trying to do
more for one group than for another; we are trying to ensure that all people
can have the same level of experience as far as possible.

Some people express themselves well in text, but not at all in images or
sound. Some express things in sound, but not in text or images. Some can draw
images but cannot write clearly at all. Many (myself included) are not
particularly good at any of these, and without some support from their
authoring environments will do a mediocre (at best) job at them all. But that
does not change the fact that some people cannot tell what an image is trying
to convey, or a piece of music, or a slab of long words, and that we ought to
make it clear that any of these things alone will present barriers to people,
and find techniques to remove those barriers.

Charles McCN

On Sat, 28 Jul 2001, Marti McCuller wrote:

  It seems to me that we are asking the web to do more in this case for the
  "learning-disabled" (or dyslexic etc) than in other cases.  Text equivalents
  must still be translated to a usable form on the user/client end (audio,
  Braille, etc.) In the case of Visually Impaired users we expect them to have
  appropriate applications to do their part.  An alternate to "text" in the
  form of sound is not needed because the software is available to make the
  translation. Can't we reasonably expect the learning-disabled to provide
  some of their own "translation"?
  Marti
Received on Sunday, 29 July 2001 00:06:59 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 7 December 2009 10:47:11 GMT