
Re: follow up on Wed 27th UA action item

From: mark novak <menovak@facstaff.wisc.edu>
Date: Sat, 30 Oct 1999 23:31:06 -0500
Message-Id: <v01540b03b441726004eb@[128.104.23.196]>
To: <thatch@us.ibm.com>
Cc: w3c-wai-ua@w3.org
see comment at MN:

At 9:13 PM 10/30/99, <thatch@us.ibm.com> wrote:
>I like the clarity of the two items you have come up with, Mark.
>
>Quote: Ensure the UA makes use of the standard platform APIs to
>render information (e.g., text, graphics, etc.) to the standard
>platform  output device (e.g., for video, the screen, etc.). The
>UA should not, for reasons of speed, efficiency, etc., bypass
>the standard platform APIs to render information.  For example,
>a UA should not directly manipulate the memory associated with
>information being rendered, because screen review utilities
>would not be capable of monitoring this activity from the
>platform APIs. (P1) Endquote.
>
>As you said, this is standard software accessibility requirement; I
>especially like your wording.
>
>Quote: if a UA renders information using multiple output modalities,
>it must render the same information via each output modality that
>it natively supports. Endquote.
>
>I wonder how one decides that a modality is really serious. My browser
>makes noises (!) using the sound system, but that doesn't fall under this
>guideline. You mentioned lacking self-voicing menus in a speech
>browser as failing this. But what if menus talked, but (real) table
>structure was lost?  I suggest that "the same" above needs to be
>replaced with "equivalent."

MN:  thanks Jim, fine with me to change "the same" to "equivalent", so that
this second proposed checkpoint would read...

if a UA renders information using multiple output modalities,
it must render equivalent information via each output modality that
it natively supports.
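
MN: to make the revised checkpoint concrete, here is a hypothetical sketch
(the class and method names are invented for illustration, not taken from
any actual UA) of a renderer that routes every piece of information through
each output modality the UA natively supports, so the aural backend receives
information equivalent to what the visual one gets, menus included:

```python
# Hypothetical sketch: a UA renderer that sends equivalent information
# to every output modality it natively supports.  All names are invented
# for illustration.

class VisualBackend:
    def __init__(self):
        self.rendered = []

    def render(self, item):
        # e.g., draw the text through the platform's graphics API
        self.rendered.append(item)

class AuralBackend:
    def __init__(self):
        self.rendered = []

    def render(self, item):
        # e.g., send the text to a speech synthesizer
        self.rendered.append(item)

class UserAgentRenderer:
    """Routes each rendered item through every supported modality."""
    def __init__(self, backends):
        self.backends = backends

    def render(self, item):
        for backend in self.backends:
            backend.render(item)

visual = VisualBackend()
aural = AuralBackend()
ua = UserAgentRenderer([visual, aural])

# Menus and document content alike go through the same path, so a
# self-voicing browser built this way would also speak its menus.
ua.render("File menu")
ua.render("Document heading: Welcome")

assert visual.rendered == aural.rendered
```

The point of the single render path is that nothing (menus, progress
information, document text) can reach one modality without reaching the
others.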


>
>Jim Thatcher
>IBM Special Needs Systems
>www.ibm.com/sns
>HPR Documentation page: http://www.austin.ibm.com/sns/hprdoc.html
>thatch@us.ibm.com
>(512)838-0432
>
>
>menovak@facstaff.wisc.edu (mark novak) on 10/29/99 12:38:41 PM
>
>To:   ij@w3.org
>cc:   w3c-wai-ua@w3.org
>Subject:  follow up on Wed 27th UA action item
>
>
>
>
>hi Ian, et al.
>
>per my action item:
>
><snip>
>
>   10.MN: Repropose wording for Ian's proposed Checkpoint 1.5 described in:
>      http://lists.w3.org/Archives/Public/w3c-wai-ua/1999OctDec/0157.html
>
>which was (not sure if this still is?):
>
>1.5 Ensure that information output as part of operating the user agent
>is available through output device APIs implemented by the user agent.
>[Priority 1]
>       For instance, users must be able to operate the user agent without
>       relying on two-dimensional graphical output, cursor position, etc.
>       User agents must ensure that information about how much of a page
>       or video clip has been viewed is available through output device
>       APIs. Proportional navigation bars may provide this information
>       visually, but the information must be available to users relying
>       on synthesized speech or braille output.
>
></snip>
>
>As I listened and thought about the discussion, there seemed to be
>two parts.
>
>1 - why have a checkpoint to tell people to use the standard output APIs
>(which has sub-points about how best to do so, which I think are
>techniques, such as the examples given in the "For instance....." above)
>
>2 - what is required if UAs provide redundant output, or maybe even:
>should UAs provide redundant output?
>
>Addressing each as follows:
>
>Item #1:
>
>I think it is still critical that we (the UA group and guidelines) have
>a checkpoint for using standard output APIs, just as we do for standard
>input APIs.  Just as developers can and have learned to get around the
>standard keyboard event queue, they can and have learned to get around
>the standard output when drawing text (and graphics) to a video screen,
>for example.
>
>While not the best wordsmith, I think checkpoint 1.5 needs to state
>something like:
>
>"Ensure the UA makes use of the standard platform APIs to render
>information (e.g., text,
>graphics, etc.) to the standard platform output device (e.g., for video,
>the screen, etc. ).  The
>UA should not, for reasons of speed, efficiency, etc., bypass the standard
>platform
>APIs to render information.  For example, a UA should not directly
>manipulate the
>memory associated with information being rendered, because screen review
>utilities
>would not be capable of monitoring this activity from the platform APIs."
>
>This needs to be a P1 as Ian had it.
>
>If people want to dig into this deeper, please review the Microsoft
>guidelines for accessible software design on their web site at:
>
>http://www.microsoft.com/enable/
>
>Check out Section 5, Exposing Screen Elements, Drawing to the screen.
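
MN: to illustrate the distinction item #1 is drawing, here is a toy
simulation (the classes and calls below are invented for illustration;
they are not any real platform API). Information rendered through the
standard API is visible to a screen review utility, while information
written directly to display memory is not:

```python
# Toy simulation of why bypassing the standard output APIs breaks
# screen review utilities.  All names are invented for illustration.

class Platform:
    """Stands in for the OS graphics layer."""
    def __init__(self):
        self.video_memory = []   # what actually appears on screen
        self.monitors = []       # screen review utilities hook in here

    def draw_text(self, text):
        # The standard platform API: renders AND notifies monitors.
        self.video_memory.append(text)
        for monitor in self.monitors:
            monitor.notify(text)

class ScreenReviewUtility:
    """Sees only what flows through the platform APIs."""
    def __init__(self, platform):
        self.seen = []
        platform.monitors.append(self)

    def notify(self, text):
        self.seen.append(text)

platform = Platform()
reviewer = ScreenReviewUtility(platform)

# A well-behaved UA renders through the standard API:
platform.draw_text("Welcome to the page")

# A UA that bypasses the API and manipulates display memory directly
# still gets pixels on screen, but the reviewer never hears about it:
platform.video_memory.append("Secretly drawn text")

assert "Welcome to the page" in reviewer.seen
assert "Secretly drawn text" not in reviewer.seen
```

Both strings end up "on screen" (in video_memory), but only the one
rendered through the API reaches the monitoring hook, which is exactly
the situation the proposed P1 checkpoint is meant to prevent.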
>
>
>
>Item #2
>
>The area of output redundancy, or multiple output modalities, is perhaps
>a newer concept.  Granted, multi-media capable computers have been around
>for quite some time, but I don't think this technology has quite reached
>where we need it to be.
>
>However, I think it gives UAs something to shoot for, and does move the
>guidelines in the right direction if we add to 1.5, or perhaps add a
>separate checkpoint about the UA's responsibility, should that UA decide
>to provide multiple output modalities.  If you again look at many other
>guidelines that already exist, you will see references to supporting the
>users' choice of output methods.  I also think we'd agree that a UA that
>did so would be a much more flexible, and perhaps more powerful, UA.
>The concern which I think I heard during the teleconference call was
>"requiring the UA" to provide multiple output modalities.  I don't think
>that was the intention of the discussion, but I do think that the
>intention was, "if the UA provides output using multiple modalities, it
>must render the same information via each output modality".  In other
>words, if I'm a UA that natively does both visual and aural output, I
>need to render everything aurally that I render visually.  My comment
>was, "I didn't see much value in a self-voicing browser that didn't
>speak its menus".  Perhaps a simple example, but I hope the point is
>made.
>
>
>Would a checkpoint along the lines of:
>
>"if a UA renders information using multiple output modalities, it must
>render the same information via each output modality that it natively
>supports"
>
>be technically difficult to do?  I think so.  Could UAs ever meet this
>requirement?  I also think that they can and will.
>
>(sorry for the length of this reply)
>
>Mark
Received on Sunday, 31 October 1999 00:28:56 UTC
