Scoring Example User Agents, etc.

To: UA List
From: Eric Hansen
Re: Scoring Example User Agents

Suggestion 1: Determine whether there needs to be any adjustment in
checkpoints due to the new focus on "mainstream graphical user agents".

The 1 September 2000 draft focuses on "mainstream graphical user agents",
and I presume that this excludes user agents that are specifically or
exclusively for people with disabilities. This seems consistent with our
earlier discussion of focusing on "general purpose user agents" and with
the even earlier decision to focus on "general purpose graphical user
agents". However, as I have said before, there does seem to be a bit of a
mismatch between the checkpoints and this scope. For example, as currently
written, a user agent could achieve the triple-A level yet not even be able
to present graphics. Specifically, there are no requirements that the user
agent be able to present any given media type.

This still seems a bit unusual, especially given the current definition of
user agent ("A user agent is software that retrieves and renders Web
content, including text, graphics, sounds, video, images, and other content
types."), though I suppose that a more accurate definition would replace
the word "including" with the words "such as".

My own concern about this mismatch waxes and wanes. Previous suggestions to
add checkpoints requiring an ability to present certain media types have
not been implemented. I mention it now because the changes I suggested
earlier may help solve other problems as well.

========

Suggestion 2: Gather data to strengthen our understanding of some key
questions related to assistive technologies, plug-ins, add-ons, etc.

The 1 September 2000 draft moves in the right direction by clarifying the
relationship between the UAAG document and assistive technologies:

"3.2 Which user agents are expected to conform"

"Users with disabilities often require more than one user agent for full
access to the Web. For example, a user might require a graphical desktop
browser, a multimedia player, and specialized assistive technologies such as
screen readers, which are useful for controlling speech output and
refreshable braille display. This document focuses on the accessibility of
mainstream user agents so that most users with disabilities will have access
to the Web when using a conforming user agent in conjunction with assistive
technologies. There are also requirements in this document to make user
agents more accessible to those users with disabilities who do not require
assistive technologies for full access."

This paragraph is helpful and seems to reflect our current state of
knowledge pretty well. Yet there are ambiguities. The first and second
sentences imply that a combination of technologies -- "graphical desktop
browser, a multimedia player, and specialized assistive technologies" -- is
necessary to provide "full access". Yet the third sentence seems to treat
the "conforming user agent" as a singular user agent rather than as a
composite (or combined) user agent. I think that we ourselves need to
understand better how our document behaves when used under different,
commonly occurring situations.

The need for further clarification was highlighted for me when I realized
how unsure I was about the answers to the following questions.

a. No DOM Support in Component of a Composite User Agent

True or False: "It is possible for a user agent that does not implement the
DOM to be a component of a composite user agent that achieves any of the
three levels of claim (single-A, double-A, and triple-A)."

Answer: True (?). I thought that there was going to be some indication that
checkpoints 5.1, 5.2, etc., _cannot_ be considered inapplicable, but I
don't see one, so I give "True" as the correct answer. I have not examined
the document fully, but I don't see such a provision.
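
As an aside, a minimal sketch may help make concrete what "implementing the
DOM" means in practice: programmatic, structured read access to content
that other components of a composite user agent (such as a screen reader)
can rely on. The sketch below uses Python's xml.dom.minidom purely as a
stand-in for a user agent's DOM implementation; the library, the markup,
and the helper function are illustrative assumptions, not anything taken
from the UAAG draft.

# Illustrative sketch only. xml.dom.minidom stands in here for a user
# agent's DOM implementation; UAAG does not prescribe a language or library.
from xml.dom.minidom import parseString

markup = """<html>
  <body>
    <p>First paragraph.</p>
    <p>Second paragraph, with an <a href="http://www.w3.org/">anchor</a>.</p>
  </body>
</html>"""

doc = parseString(markup)

def text_of(node):
    """Concatenate the data of all descendant text nodes."""
    if node.nodeType == node.TEXT_NODE:
        return node.data
    return "".join(text_of(child) for child in node.childNodes)

# A screen reader or other component of a composite user agent could walk
# the tree to query content independently of how the host agent renders it.
for p in doc.getElementsByTagName("p"):
    print(text_of(p).strip())

# Link targets can be enumerated the same way; this is the kind of read
# access that the DOM-related checkpoints (5.1, 5.2) are concerned with.
for a in doc.getElementsByTagName("a"):
    print("link:", a.getAttribute("href"))

If the host user agent does not expose its content through the DOM (or an
equivalent API), a component like this has nothing to walk, which is why
the status of checkpoints 5.1, 5.2, etc. matters so much for composite
claims.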

====

b. No DOM Support in Singular User Agent

True or False: "It is possible for a singular (non-composite) user agent to
not implement the DOM and yet achieve any of the three levels of claim
(single-A, double-A, and triple-A)."

Answer: True (?). See answer for "a." 
====

c. Text Reader Within Composite User Agent

True or False: "It is possible for a text reader (e.g., screen reader) user
agent to be a component of a user agent that achieves any of the three
levels of claim (single-A, double-A, and triple-A)."

Answer: True (?). 

====

d. Text Reader in Singular User Agent

True or False: "It is possible for a text reader (e.g., screen reader) user
agent to be a singular (non-composite) user agent that achieves any of the
three levels of claim (single-A, double-A, and triple-A)."

Answer: True (?), though given the focus on mainstream graphical user
agents it would not make sense to analyze a singular user agent of this
type.

====

e. Braille Within Composite User Agent

True or False: "It is possible for a braille display user agent to be a
component of a user agent that achieves any of the three levels of claim
(single-A, double-A, and triple-A)."

Answer: True (?).

====

f. Braille Singular User Agent

True or False: "It is possible for a braille display user agent to be a
singular (non-composite) user agent that achieves any of the three levels of
claim (single-A, double-A, and triple-A)."

Answer: True (?), though given the focus on mainstream graphical user
agents it would not make sense to analyze a singular user agent of this
type.

====

g. Advantages and Disadvantages

Open Ended: "Describe the advantages and disadvantages, for the developer
of a 'mainstream graphical user agent', of analyzing their user agent as
part of a composite user agent that includes one or more of the following:
multimedia player, Braille display, and screen reader package."

Answer: Not sure. I think that one thing someone answering this question
must keep in mind is that the subject of each checkpoint is the subject of
the claim. This means, for example, that if a composite user agent includes
a Braille display, then the "user interface" in checkpoints such as 1.1,
1.3, 1.5, 2.1, 5.2, 5.4, 5.5, 5.8, 8.7, and 10.9 should refer to the
interface of the Braille display as well as to whatever may be presented on
other display devices.

Conclusion:

I would suggest asking one or more people to score the following user
agents. If only a few of these could be done, I might suggest focusing on
numbers 1, 5, 8, and 11. I think that all of these cases would be helpful.

1. Singular user agent = mainstream graphical user agent
2. Singular user agent = multimedia player
3. Singular user agent = screen reader package
4. Singular user agent = Braille display
5. Composite user agent = mainstream graphical user agent + multimedia
player 
6. Composite user agent = mainstream graphical user agent + screen reader
package
7. Composite user agent = mainstream graphical user agent + Braille display
8. Composite user agent = mainstream graphical user agent + multimedia
player + screen reader package 
9. Composite user agent = mainstream graphical user agent + multimedia
player + Braille display
10. Composite user agent = mainstream graphical user agent + screen reader
package + Braille display
11. Composite user agent = mainstream graphical user agent + multimedia
player + screen reader package + Braille display
12. Singular user agent = text-only browser
13. Singular user agent = audio-only (telephone) browser
14. Singular user agent = audio-and-text (telephone) browser
I think that by examining the claims that would result from these various
cases, we would be better able to guide people in including or excluding
user agents from the analyses.
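
To show the kind of bookkeeping such a scoring exercise involves, here is a
toy model of how per-checkpoint results from the components of a composite
user agent might be combined into a single claim. It is emphatically not
the conformance procedure in the draft: the combination rule, the priority
assignments, and the sample results are all assumptions made for
illustration; only the checkpoint numbers are borrowed from the discussion
above.

# Toy model only -- NOT the conformance procedure in the UAAG draft.
# Each component reports, per checkpoint, one of "pass", "fail", or "n/a".

# Hypothetical priority assignments (placeholders, not the draft's values).
PRIORITY = {"1.1": 1, "1.3": 1, "2.1": 1, "5.1": 2, "5.2": 1, "10.9": 3}

def combine(components):
    """Merge per-component results into composite per-checkpoint results."""
    composite = {}
    for checkpoint in PRIORITY:
        results = [c.get(checkpoint, "n/a") for c in components]
        if "pass" in results:
            composite[checkpoint] = "pass"   # some component satisfies it
        elif "fail" in results:
            composite[checkpoint] = "fail"   # applicable but unsatisfied
        else:
            composite[checkpoint] = "n/a"    # inapplicable to every component
    return composite

def claim_level(composite):
    """Single-A: all P1 met or n/a; double-A adds P2; triple-A adds P3."""
    def met(priority):
        return all(result in ("pass", "n/a")
                   for cp, result in composite.items()
                   if PRIORITY[cp] == priority)
    if not met(1):
        return "no claim"
    if not met(2):
        return "single-A"
    if not met(3):
        return "double-A"
    return "triple-A"

# Case 6 above: mainstream graphical user agent + screen reader package.
browser = {"1.1": "pass", "1.3": "pass", "2.1": "pass", "5.1": "pass",
           "5.2": "pass", "10.9": "fail"}
screen_reader = {"1.1": "pass", "2.1": "pass"}

print(claim_level(combine([browser, screen_reader])))   # prints: double-A

Running the listed cases through even a crude model like this would surface
exactly the kinds of surprises mentioned below, such as a component whose
addition unexpectedly raises or lowers the claim, or a case where nearly
every checkpoint comes out inapplicable.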

The more of these cases we can examine, the better we will be able to
eliminate bugs and other problems that might now exist in the document.

Without this kind of analysis, I am not sure that we can properly judge the
impact of our recent decisions to (a) allow composite user agents and (b)
focus more narrowly on 'mainstream graphical user agents'. 

Examples of results that would be of particular interest include:
unexpected increases or decreases in conformance when a particular kind of
component is added; unusually high or low numbers of inapplicable
checkpoints; and higher accessibility ratings for non-graphical or
specialized user agents than for mainstream graphical user agents.

Even if such results caused no changes to the checkpoints, they might
result in changes in how we tell people to make the best use of the UAAG
document.

It seems to me that we need not only (a) example implementations of
individual checkpoints but also (b) example scorings of the user agents
(singular and composite) that people may want to analyze with the
guidelines.

Does this make any sense?
===========================
Eric G. Hansen, Ph.D.
Development Scientist
Educational Testing Service
ETS 12-R
Princeton, NJ 08541
609-734-5615 (Voice)
E-mail: ehansen@ets.org
FAX 609-734-1090

Received on Tuesday, 5 September 2000 15:17:51 UTC