Re: Some comments on conformance levels in UA guidelines draft

peter.b.l.meijer@philips.com wrote:
> First I wish to compliment the people who defined the
> User Agent Accessibility Guidelines Working Draft with
> their excellent work. A few comments nevertheless, and
> I apologize in advance in case my comments have perhaps
> already been covered by earlier discussions. Note also
> that below I will implicitly be referring only to user
> agent accessibility issues for blind users.

Thank you for the comments, Peter. My short answer to
your comments is that conformance to the Guidelines is meant
for one user agent at a time: a user agent conforms on its own,
not in conjunction with other software. From section 1.6 on
conformance [1]:

         "User agents must satisfy natively all the applicable 
                                   ^^^^^^^^
         checkpoints for a chosen conformance level."

[1] http://www.w3.org/TR/1999/WD-WAI-USERAGENT-19991105/#conformance

Of course, for marketing purposes, companies can vaunt how tools
used in combination satisfy all of the checkpoints. While such a
combination may be accessible and useful, the claim would not meet
the conformance requirements of this version of the User Agent
Guidelines.
For some background about this discussion and decision,
please refer to this summary [2] of conformance issues.

The conformance statement of this document has been designed for
general-purpose user agents, not specialized tools. To avoid
the dependencies you describe below (e.g., a feature that works
with one screen reader but not with another), we decided that
conformance would not include tools used in combination.

Let me know if this clarifies the issue you raised, and thanks
again for your review.

 - Ian

[2] http://lists.w3.org/Archives/Public/w3c-wai-ua/1999JulSep/0433.html
 
> As the developer of a user agent that is meant to make 
> pure web graphics more accessible to totally blind users
> (see the URL below), I foresee several rather fundamental
> problems in defining the conformance levels, such as "A", 
> "Double-A" and "Triple-A", and in using these to gauge
> whether or not my agent complies with these conformance 
> levels.
> 
> My user agent in practice needs both the basic operating
> system features and a "good" screen reader to be "fully
> accessible" to blind users. Now some of the necessary 
> accessibility features will be provided by the operating 
> system, some others by the run-of-the-mill screen reader, 
> and still others only by particular brands or versions of 
> screen readers, but how far can one stretch this? A vendor
> might say that his or her unique owner-drawn graphical items
> are readable to blind users if the user runs third-party OCR
> product X over a screen capture of that dialog. A
> crude way of working for sure, and hardly user-friendly, 
> but it might indeed work according to the rules set by the 
> guidelines: no off-screen model is required or involved 
> here, and the user can access the information if the right
> combination of accessibility tools is bought and used - 
> which can also get expensive beyond reason.
> 
> The current consensus seems to be that requiring a screen
> reader is acceptable, but at some point this does not make
> sense anymore. What if one of my user agent's features is 
> accessible via screen reader "X" but not via screen reader
> "Y", or vice versa. Do I qualify for an "AAA" rating if at 
> least one screen reader exists that does the trick? What 
> if my feature "a" is handled well by screen reader "X", 
> but not by screen reader "Y", while feature "b" is handled
> well by "Y" but not by "X"? Can I still get away with it by
> just saying that the user should buy both screen readers?
> That doesn't "feel" right, of course, but it is unclear to
> me where it ends, or where the borderlines are exactly.
> 
> Somehow the minimum level of accessibility support provided
> by a "good" screen reader (in conjunction with the underlying
> operating system) has to be carefully defined first. Without 
> that definition of a "reference" screen reader, any accessibility
> rating for a user agent relying on what a third-party screen
> reader "might" do carries rather little weight, it seems.
> 
> Just a simple example: the standard progress bar in Microsoft
> Windows. I was recently surprised to find that at least one 
> well-known screen reader could not read it out (this was then
> confirmed by a blind power user of that particular screen 
> reader). The progress bar is certainly displayed as purely 
> graphical, but the API system call used by the application 
> programmer (me) provides the position and range numbers that 
> should suffice to make its reading readily accessible through
> a screen reader. So there is no technical reason why the 
> progress bar would not be accessible, even though it appears
> as purely graphical on the screen. Now step 1 should be that
> the operating system indeed makes the progress bar numbers
> from the API call available to screen readers, for instance 
> via its "off-screen model". Step 2 is that the screen reader
> also uses these numbers to render them into an accessible 
> representation (e.g., speaking something like "progress 35 
> percent" for the default 0 to 100 scale). Step 3 seems to be
> that if the screen reader for some reason does not do this 
> that the user agent does it instead? Is step 3 necessary if 
> at least one screen reader on the market covers it? How does
> this affect "my" user agent rating? Right now there are too
> many ill-defined dependencies to make the conformance level
> or a derived rating work for me.
> 
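For concreteness: if the control you mean is the standard Win32
common control, the two halves of your progress bar example might
look roughly like the sketch below (illustrative names and numbers,
not your actual code, and assuming MSAA/oleacc is present on the
system):

    #include <windows.h>
    #include <commctrl.h>   /* progress bar messages; link comctl32.lib */
    #include <oleacc.h>     /* MSAA; link oleacc.lib (and uuid.lib)     */

    /* Step 1, application side: create a standard progress bar and
       publish its range and position through the ordinary common
       control messages. It is these numbers, not any pixels, that
       the system can hand on to an assistive technology. */
    static HWND create_progress(HWND parent, HINSTANCE inst)
    {
        InitCommonControls();   /* register the common control classes */

        HWND bar = CreateWindowEx(0, PROGRESS_CLASS, NULL,
                                  WS_CHILD | WS_VISIBLE,
                                  10, 10, 300, 20,
                                  parent, (HMENU)(INT_PTR)1, inst, NULL);

        /* Range 0..100, position 35: exactly the numbers a screen
           reader would need in order to say "progress 35 percent". */
        SendMessage(bar, PBM_SETRANGE, 0, MAKELPARAM(0, 100));
        SendMessage(bar, PBM_SETPOS, 35, 0);
        return bar;
    }

    /* Steps 2/3 in miniature: what a screen reader, or failing that
       the user agent itself, could ask MSAA for. Whether the default
       oleacc proxy actually supplies a value for a given control is
       precisely the variability described above. Assumes COM has
       been initialised on this thread (CoInitialize). */
    static void query_progress_value(HWND bar)
    {
        IAccessible *acc = NULL;
        HRESULT hr = AccessibleObjectFromWindow(bar, (DWORD)OBJID_CLIENT,
                                                IID_IAccessible,
                                                (void **)&acc);
        if (SUCCEEDED(hr))
        {
            VARIANT self;
            VariantInit(&self);
            self.vt   = VT_I4;
            self.lVal = CHILDID_SELF;

            BSTR value = NULL;
            if (SUCCEEDED(acc->get_accValue(self, &value)) && value)
            {
                /* value is typically a short string such as "35%",
                   ready to be rendered as speech or braille */
                SysFreeString(value);
            }
            acc->Release();
        }
    }

Whether any given screen reader ever announces those numbers is then
exactly your step 1 / step 2 question: the system has to expose them,
and the screen reader has to ask for them and speak them. Your step 3
only arises when that chain breaks somewhere.
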
> Please note that I do understand and appreciate the working
> group's encouragement to use only standard operating
> system APIs whenever possible, because that indeed gives
> the best chance that most third-party screen readers will
> be able to deal with it. I try to follow these guidelines
> myself and, in case of doubt, test things with one or two
> screen readers, while end users give me feedback if I have
> inadvertently missed something. However, "branding" a user
> agent with respect to conformance with the guidelines
> requires far stricter definitions of what that conformance
> entails, because user agent developers cannot possibly test
> their user agent with every screen reader (and every version
> of it) released on the market.
> 
> I'd think the accessibility topic should be covered in two
> stages. A clear definition of what an imaginary "reference" 
> screen reader would do is required first. Does the reference
> screen reader speak standard menu items? Yes. Buttons? Yes. 
> Does it speak standard progress bars? I would hope so. So 
> we first need a checklist that defines and measures a 
> conformance level for screen readers, before we define and 
> measure a conformance level of user agents that in practice
> often (have to) run "on top of" the combination of operating
> system and screen reader.
> 
> Perhaps the use (and thus the possible abuse) of the accessibility
> guidelines' conformance levels to brand products should be
> strongly discouraged, or even forbidden for the time being,
> through some explicit working group statements in these
> guidelines?
> 
> Best wishes,
> 
> Peter Meijer
> 
> 
> The vOICe Internet Sonification Browser
> http://ourworld.compuserve.com/homepages/Peter_Meijer/eyebrows.htm
> 


-- 
Ian Jacobs (jacobs@w3.org)   http://www.w3.org/People/Jacobs
Tel/Fax:                     +1 212 684-1814
Cell:                        +1 917 450-8783
