Re: BIG ISSUES REVISED 9-20-01 (SEE BOTTOM OF NOTE)

In my role as co-chair, I will summarize here the arguments that have
been made in the past with respect to some of the outstanding "big
issues" on Gregg's list. This should enable us to proceed without
retracing old ground too much.
Gregg Vanderheiden writes:
 > 
 >                   BIG ISSUES LIST   9-20-01
 > 
 > 6. Implementation
 > (Should difficulty in implementation affect priority.)
When we last considered this issue, the following arguments were
raised against taking difficulty of implementation into account:

a. Difficulty of implementation depends very much on circumstances:
what is difficult for one person, or with one technology, may be
comparatively easy in other contexts. Correcting web content that has
already been written may or may not be harder than writing new content
according to the guidelines. Difficulty also depends on the resources
available to the developer (time, expertise, etc.), which differ
significantly from one developer to another.

b. Difficulty is a relative concept for which there are no agreed
measures or criteria.

c. The guidelines are not regulations; it is the purpose of laws to
take difficulty and concepts of "unjustifiable hardship" into account,
but it would be inappropriate to do so in a technical Recommendation.

d. Any judgments about difficulty made in the guidelines are likely to
be overridden, in effect, by the legal requirements imposed by
anti-discrimination law, which vary from one jurisdiction to another.
Faced with a conflict, developers are likely to turn to the legal
requirements rather than the guidelines in determining what is
reasonable.

e. This working group doesn't have the expertise to make judgments
about difficulty.

I don't recall any strong arguments supporting the proposition that
difficulty should be taken into account in defining WCAG 2.0
priorities. Some have criticized the WCAG 1.0 conformance scheme on
the ground that it creates unwelcome incentives by admitting only
three levels of conformance corresponding to the three priority levels
of the guidelines. The only other comment regarding difficulty of
implementation that I recall is Len's suggestion that, if it is to be
mentioned at all, it be clearly separated from the guidelines and the
priority scheme.
 > 
 > 9. Access for absolutely all?    - If not, how to draw line
 > (one suggestion was  "BEST EFFORT")
Another issue often discussed in this connection is the relevance of
the author's intended audience in determining where the demarcation
should lie. Of course, as has frequently been pointed out, it is not
legitimate for a developer to define the intended audience by
explicitly excluding persons with disabilities, as this would defeat
the entire purpose of the guidelines. The question remains, however,
of the role of the intended audience in determining what measures a
developer should take, especially under guideline 3, and how this
should be explained in the document.

There is also consensus that we should provide whatever techniques are
available and appropriate to enable developers to implement a broad
range of access solutions.
 > 
 > 10. Guidelines for all sites vs. special sites
I thought we decided that these guidelines were intended to cover all
web sites, but that (as a later project) it would be appropriate to
write documents explaining how to customize sites for particular
audiences. At least, that is my impression of where the matter stood
when it was last considered. There were also suggestions that
techniques documents were appropriate locations for such information
but that it was of secondary importance by comparison with the group's
underlying purpose of writing guidelines applicable to web sites in
general. Server-side techniques may address this issue to some extent.
 > 
 > 12. Accessibility vs. usability
This is a difficult issue, compounded by a lack of adequate
definitions. It is worth asking whether the distinction is a useful
one. I don't recall any groundbreaking arguments or insights in this
area.
 > 
 > 13. Conformance - why do it? How to test?
There has been plenty of discussion of this point, with few concrete
proposals to date. At the checkpoint level, I think it is agreed that
conformance means satisfying the success criteria, and that the
technology/issue-specific documents will specify how to satisfy the
success criteria of each relevant checkpoint using the technology
under consideration, or (as in core techniques) under various
circumstances. We also know that conformance testing will be partially
automated by suitable software and that the Evaluation and Report
Language is available to encapsulate the results in a machine-readable format.
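To make the reporting idea concrete, here is a rough sketch of what an
EARL assertion for a failed checkpoint might look like, written in RDF
(Turtle syntax). The URIs, the test identifier, and the checker name
are purely illustrative assumptions, not anything this group has
settled; the property names follow the draft EARL vocabulary.

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# Hypothetical assertion: an evaluation tool reports that a page
# fails one success criterion. All names below are illustrative.
<#assertion1> a earl:Assertion ;
    earl:assertedBy <#exampleChecker> ;
    earl:subject <http://example.org/page.html> ;
    earl:test <#example-success-criterion> ;
    earl:result [ a earl:TestResult ;
                  earl:outcome earl:failed ;
                  dct:description "Image lacks a text equivalent." ] .
```

The point of such a format is that results from different tools, and
from manual review, could be merged and compared mechanically.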
 > 
 > 14. Author and user needs conflict
Can this issue be clarified? I don't know what it means.
 > 
 > 15. User and user needs conflict
 > 
Same comment as per 14--I don't know what the issue is.

In the above summary I have tried to capture the essence of the
relevant arguments, where possible, on all sides.

Received on Thursday, 20 September 2001 23:47:31 UTC