Re: The Evaluation Techniques Strike Back

Great subject line. :)

I don't have any answers, but I'm glad you are asking these questions.  Another 
thing to take into consideration is the work on WCAG 2.0. [1]  WCAG 2.0 has 
"success criteria" for each checkpoint - these are supposed to be testable.

The Techniques documents will have more testable criteria that look more 
like the AERT. [2]

Therefore, if WCAG provides what should be produced, ATAG should provide 
how to help the author get it there...not a new idea...just restating it 
FYI.  This follows on from what we discussed last June in Amsterdam [3].

--wendy

[1] http://www.w3.org/WAI/GL/WCAG20/
[2] http://www.w3.org/WAI/GL/WCAG20/wcagtech020320.html
[3] 3:30 - 5:00 Techniques break-out sessions: HTML and Graphics/Multimedia
http://www.w3.org/WAI/GL/2001/06/21-f2f-minutes.html

At 05:20 PM 5/28/02, Jan Richards wrote:
>Hi all,
>
>It seems that the ATAG evaluation techniques are always on the agenda,
>but for some reason, we never quite get to them. As we put together an
>agenda for the Austria F2F, perhaps we should return to the subject and
>survey the numerous outstanding issues (which I will be placing in an
>issues page - linked from a new Evaluation Techniques sub-section on the
>AU homepage).
>
>As I see it, we need to come to a consensus on the following:
>
>1. Do we want the evaluation techniques to be a step-by-step procedure
>for people who are not familiar with ATAG? (e.g. "Open the file supplied
>and then perform X. If you see Y happen then the tool passes; if Z, then
>it fails.")  Or will the evaluation techniques be intended to support
>evaluations by people familiar with ATAG (e.g. "Here are some things to
>keep in mind when assessing X in the tool")? - Either way, how can we
>avoid specifying things at the level of markup (which is best left to
>WCAG whenever possible)? In other words, will we have to have a
>different set of tests for HTML, SVG, etc.?
>
>2. How will we take into account all the different kinds of tools? Will
>we break the evaluation techniques into groups by ATAG checkpoint? Will
>we end up with something like the AERT, but with different tool types
>rather than different markup languages?
>
>3. What will the relationship be between the evaluation techniques
>and the implementation techniques? If we include implementation
>specifics in the tests (e.g. "To assess whether highlighting has been
>used in the dialog, check whether any options are highlighted by ordering
>or color."), how will we avoid this being seen as limiting the creative
>flexibility of developers and becoming de facto prescriptive
>requirements?
>
>4. How can we support evaluation of checkpoints dealing with accessible
>output (for WCAG P1, P2 and P3) when checking tools are not up to the
>task yet? How much testing of output is sufficient?
>
>5. How can we support checking of the accessibility interface
>checkpoints in guideline 7? Will we provide pointers to platform-specific
>standards, "rules of thumb" for checking interfaces, etc.?
>
>6. How should we use the QA work
>(http://www.w3.org/TR/2002/WD-qaframe-spec-20020515/)?
>
>
>Answers? More questions? Comments?
>
>--
>Cheers,
>Jan
>
>/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>
>Jan Richards
>UI Design Specialist
>Adaptive Technology Resource Centre (ATRC)
>University of Toronto
>
>jan.richards@utoronto.ca
>Phone: (416) 946-7060
>Fax: (416) 971-2896
>
>/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

-- 
wendy a chisholm
world wide web consortium
web accessibility initiative
seattle, wa usa
/--
