Re: Back to Principle 1

I like where Marti is going with this.  

Saying that you have to provide alternatives to content that is targeted
by default at visual or auditory presentation presumes that this content is
exchanged in a format usable only in that sensory mode.  This is
common today with data formats such as AU and GIF, but it is not the root
logic of the principle.  A MIDI file is symbolic: you can get a score from
a MIDI file with a stylesheet, and you can get sound from it with a MIDI
player.  So text is not the only kind of content that is symbolic and
presentable in different ways to different senses.  A visual presentation
can arise from a video format that deals at the pixel level, or it can be
generated in the client from a structured model of the world being viewed,
as with VRML or animated SVG.  Formats vary in the degree to which they
are usable with different sensory modes.

As Marti says, the root principle is to provide the user with options.
Whether the options are realized as optional data or as optional processing
is a choice that should be left to the information supplier.

When you put a word in a GIF you disable the "optional processing" path to
sound that text-to-speech engines provide.  As a result you have to
backfill with the "optional data" of an ALT attribute.  This is how the
principle gives rise to the more concrete requirements in the current WCAG.
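To make the GIF example concrete, here is a minimal sketch of the two paths (the image filename is hypothetical):

```html
<!-- The word "Search" rendered as pixels in a GIF: a text-to-speech
     engine cannot recover it by processing the image data, so the
     author must backfill it as optional data in the ALT attribute. -->
<img src="search-button.gif" alt="Search">

<!-- The same word kept as text: the "optional processing" path stays
     intact, and a speech engine, braille display, or visual browser
     can each render it in its own sensory mode with no extra data. -->
<button>Search</button>
```

The second form leaves the choice of presentation to the user agent, which is exactly the "options for the user" that the root principle asks for.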

At 10:58 AM 2000-07-16 -0400, Marti wrote:
>After a couple of days to think about it I find I am still bothered by
>Principle 1.
>" Provide alternatives to auditory and visual presentations"
>Setting the guidelines under it aside for the moment, it doesn't take much
>of a stretch of the imagination to interpret this to mean I need to provide
>a sound track of some sort to read all pages aloud and perhaps the score for
>any music I might put on a page.  Given the propensity to interpret
>accessibility requirements in the worst possible light (e.g. the still
>somewhat widespread belief that it means getting rid of all graphics and
>color) this really worries me.
>Greg's suggestion about "sensory modality" was good but leads us back to the
>problem of needing to interpret the language (say that again in English?).
>Perhaps we could state the Principle as
> Provide for alternative modes of presentation
>This puts the actual presentation mode in the hands of the User Agent or
>Assistive Technology while requiring the information be provided to support
>any transformations.  Thus we can speculate about a tool for the deaf that
>transforms music to a visual representation of the sound, or other tools not
>yet imagined.
>And speaking of speculation ...
>When the guidelines under principle 1 are added, I have a few additional
>concerns, in particular the phrase "Until user agents can".  I know problems with this
>phrase have been discussed before but I don't recall any resolution. There
>is, of course, the problem of figuring out exactly when this condition has
>been satisfied, and I am left wondering: if we speculate about user
>agents doing something, why not just make a wish list of all the wonderful
>things we think they should do in the future?
>Perhaps we should eliminate the futuristic speculation and make the
>guidelines relevant to technology readily available as of a given date?
>Future updates to the guidelines could then consider each guideline in light
>of the then readily available technology and update accordingly.  This
>method would then allow for implementations that take best advantage of
>technology to meet the principles without actually violating the guidelines
>where conflicts exist, as they inevitably will.

Received on Sunday, 16 July 2000 11:46:05 UTC