
follow up on Wed 27th UA action item

From: mark novak <menovak@facstaff.wisc.edu>
Date: Fri, 29 Oct 1999 12:38:41 -0500
Message-Id: <v01540b06b43f7a6249b6@[128.104.23.196]>
To: ij@w3.org
Cc: w3c-wai-ua@w3.org
hi Ian, et al.

per my action item:

<snip>

   10.MN: Repropose wording for Ian's proposed Checkpoint 1.5 described in:
      http://lists.w3.org/Archives/Public/w3c-wai-ua/1999OctDec/0157.html

which was (not sure if this still is?) :

1.5 Ensure that information output as part of operating the user agent is
available through output device APIs implemented by the user agent. [Priority 1]
       For instance, users must be able to operate the user agent without
       relying on two-dimensional graphical output, cursor position, etc.
       User agents must ensure that information about how much of a page
       or video clip has been viewed is available through output device APIs.
       Proportional navigation bars may provide this information visually,
       but the information must be available to users relying on
       synthesized speech or braille output.

</snip>
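To make the proportional-navigation-bar example concrete, here is a minimal sketch (the class and method names are hypothetical, not from any guideline or real platform API) of exposing the fraction viewed through an output API, so that the same information a scroll bar shows visually is available as text for speech or braille:

```python
# Hypothetical viewport: viewing progress is queryable through an API
# rather than only drawn as a proportional scroll bar.

class Viewport:
    def __init__(self, total_lines):
        self.total_lines = total_lines
        self.last_visible_line = 0

    def scroll_to(self, line):
        self.last_visible_line = min(line, self.total_lines)

    def fraction_viewed(self):
        """Output API: assistive tech can query this instead of reading pixels."""
        return self.last_visible_line / self.total_lines

    def progress_text(self):
        # Same information the navigation bar shows, as speakable/brailleable text
        return f"{round(self.fraction_viewed() * 100)}% of the page viewed"

vp = Viewport(total_lines=200)
vp.scroll_to(50)
print(vp.progress_text())   # -> 25% of the page viewed
```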

As I listened and thought about the discussion, there seemed to be two parts.

1 - why have a checkpoint telling people to use the standard output APIs
(with sub-points about how best to do so, which I think are techniques, such
as the examples given in the "For instance..." above)

2 - what is required if UAs provide redundant output, or maybe even: should
UAs provide redundant output at all?

Addressing each in turn:

Item #1:

I think it is still critical that we (the UA group and guidelines) have a
checkpoint for using standard output APIs, just as we do for standard input
APIs.  Just as developers can and have learned to get around the standard
keyboard event queue, they can and have learned to get around the standard
output when drawing text (and graphics) to a video screen, for example.

While I'm not the best wordsmith, I think checkpoint 1.5 needs to state
something like:

"Ensure the UA makes use of the standard platform APIs to render information
(e.g., text, graphics, etc.) to the standard platform output device (e.g.,
for video, the screen).  The UA should not, for reasons of speed, efficiency,
etc., bypass the standard platform APIs to render information.  For example,
a UA should not directly manipulate the memory associated with information
being rendered, because screen review utilities would not be capable of
monitoring this activity through the platform APIs."

This needs to be a P1 as Ian had it.
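To illustrate why this matters, here is a small sketch (the `Platform` and `ScreenReader` classes are invented stand-ins, not any real OS API) of a screen review utility that hooks the standard text-output call.  A UA that renders through that call is visible to it; a UA that writes straight into the frame buffer is not:

```python
# Hypothetical platform graphics layer; assistive tech hooks draw_text.

class Platform:
    def __init__(self):
        self.framebuffer = []          # what ends up on screen
        self.hooks = []                # e.g., a screen reader's monitor

    def draw_text(self, text):
        self.framebuffer.append(text)  # normal rendering path
        for hook in self.hooks:        # screen review utilities see this
            hook(text)

class ScreenReader:
    def __init__(self, platform):
        self.spoken = []
        platform.hooks.append(self.spoken.append)

platform = Platform()
reader = ScreenReader(platform)

# Well-behaved UA: renders via the platform API, so the reader captures it.
platform.draw_text("Welcome to the page")

# Misbehaving UA: writes straight into video memory for "speed"; the
# reader's hook never fires, so this text is invisible to it.
platform.framebuffer.append("Fast-path text")

print(reader.spoken)   # -> ['Welcome to the page']
```

Both strings are on screen, but only the one rendered through the API reached the screen reader, which is exactly the failure mode the checkpoint is meant to prevent.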

If people want to dig deeper into this, please review the Microsoft
guidelines for accessible software design on their web site at:

http://www.microsoft.com/enable/

Check out Section 5, Exposing Screen Elements, Drawing to the screen.



Item #2

The area of output redundancy, or multiple output modalities, is perhaps a
newer concept.  Granted, multi-media capable computers have been around for
quite some time, but I don't think the technology has quite reached where we
need it to be.

However, I think it gives UAs something to shoot for, and it does move the
guidelines in the right direction if we add to 1.5, or perhaps add a separate
checkpoint about the UA's responsibility, should that UA decide to provide
multiple output modalities.  If you look again at many other guidelines that
already exist, you will see references to supporting the user's choice of
output methods.  I also think we'd agree that a UA that did so would be a
much more flexible, and perhaps more powerful, UA.  The concern I think I
heard during the teleconference call was about "requiring the UA" to provide
multiple output modalities.  I don't think that was the intention of the
discussion; rather, I think the intention was, "if the UA provides output
using multiple modalities, it must render the same information via each
output modality."  In other words, if I'm a UA that natively does both
visual and aural output, I need to render everything aurally that I render
visually.  My comment was, "I didn't see much value in a self-voicing
browser that didn't speak its menus."  Perhaps a simple example, but I hope
the point is made.


Would a checkpoint along the lines of:

"If a UA renders information using multiple output modalities, it must
render the same information via each output modality that it natively
supports."

be technically difficult to meet?  I think so.  Could UAs ever meet this
requirement?  I also think that they can and will.
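The requirement can be sketched in a few lines (a toy model with invented names, not a real UA architecture): every piece of information, including UA chrome such as menus, is routed to each natively supported modality, so the outputs never diverge:

```python
# Toy model: a UA that renders the same information via every modality
# it natively supports, including its own menus.

class UserAgent:
    def __init__(self, modalities):
        self.modalities = modalities              # e.g., visual + aural
        self.output = {m: [] for m in modalities}

    def render(self, info):
        # Route every item to each supported modality; a self-voicing
        # browser that skipped its menus here would fail the checkpoint.
        for m in self.modalities:
            self.output[m].append(info)

ua = UserAgent(["visual", "aural"])
ua.render("File menu")
ua.render("Page 3 of 10 viewed")
assert ua.output["visual"] == ua.output["aural"]
```

The assertion at the end is the checkpoint in miniature: whatever the UA shows visually, it also speaks.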

(sorry for the length of this reply)

Mark
Received on Friday, 29 October 1999 13:36:36 GMT
