
try at test for ATAG2.0 A.3.1.5 - more work needed

From: Boland Jr, Frederick E. <frederick.boland@nist.gov>
Date: Fri, 16 Nov 2012 15:54:26 -0500
To: "w3c-wai-au@w3.org" <w3c-wai-au@w3.org>
Message-ID: <D7A0423E5E193F40BE6E94126930C4930BAB59675C@MBCLUSTER.xchange.nist.gov>
There are a lot of issues with this draft test...

NOTE: We may need a definition of "customized" (customized for what?), and perhaps also of "keyboard command" (I assume a keyboard command presupposes the existence of a keyboard interface on which it operates).

---------------------beginning of test------------------------------------

1.  Document whether the platform on which the authoring tool runs supports a keyboard interface. If not, then SKIP (the SC is N/A). If so, then proceed.

2.  For the authoring tool, document all commands invoked via the keyboard interface from Step 1 (to accomplish a particular task?), drawing on user experience or the authoring tool's documentation. If there are no such commands, then SKIP (the SC is N/A). If there is at least one such command, then proceed.

3.  For each keyboard command from Step 2 (loop), perform Steps 4 through 6 for the authoring tool.

4.  Attempt to accomplish an appropriate task via the command (this may involve editing content, interacting with the authoring tool's user interface, or something else). Record the "state" of the authoring tool "before" and "after" execution of the command. Document the "specification" (testing environment) of the command execution. Then return to the "before" state if possible.

5.  Change the command according to the tester's specifications (which differ from the specifications of the testing environment documented in Step 4). Document how the command was altered and the tester's specifications for the command in this step. Then attempt to accomplish the same task from Step 4 using the altered command. Record the "before" and "after" states as in Step 4.

6.  If the difference between the "before" and "after" states from Step 5 is not the same as the difference between the "before" and "after" states from Step 4, then the command is in fact not "customized", and the authoring tool fails this SC on this platform; exit. Otherwise, continue the loop with the next command (go back to Step 3) until no more commands remain.

7.  If there has been no exit up to this point, the authoring tool passes this SC for this platform (all commands considered evaluate to "customizable").

----------------------------end of test--------------------------------------------
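To make the loop structure of Steps 1-7 concrete, here is a minimal executable sketch of the procedure. This is a hypothetical illustration only: MockTool, snapshot, invoke, customize, and state_diff are invented placeholders for whatever the real authoring tool and testing environment would provide, and the "state" is modeled as a plain dictionary.

```python
def state_diff(before, after):
    """The change a command makes: keys whose values differ."""
    return {k: (before.get(k), after.get(k))
            for k in set(before) | set(after)
            if before.get(k) != after.get(k)}

def run_a_3_1_5_test(tool):
    """Return (verdict, note); a verdict of None means the SC is N/A."""
    # Step 1: the platform must expose a keyboard interface.
    if not tool.has_keyboard_interface:
        return None, "SKIP: no keyboard interface, SC is N/A"
    # Step 2: enumerate the tool's keyboard commands.
    if not tool.commands:
        return None, "SKIP: no keyboard commands, SC is N/A"
    # Steps 3-6: loop over each command.
    for name in tool.commands:
        before = tool.snapshot()        # Step 4: "before" state
        tool.invoke(name)               # original keyboard command
        after = tool.snapshot()         # Step 4: "after" state
        tool.restore(before)            # return to the "before" state

        tool.customize(name)            # Step 5: alter per tester's spec
        before2 = tool.snapshot()
        tool.invoke(name)               # same task, altered binding
        after2 = tool.snapshot()
        tool.restore(before2)

        # Step 6: customization must not change what the command does.
        if state_diff(before, after) != state_diff(before2, after2):
            return False, "FAIL: command %r is not customizable" % name
    # Step 7: no exit so far, so the tool passes for this platform.
    return True, "PASS: all commands customizable"

class MockTool:
    """Toy authoring tool: one 'bold' command that toggles a flag."""
    has_keyboard_interface = True
    commands = ["bold"]
    def __init__(self):
        self.state = {"bold": False}
    def snapshot(self):
        return dict(self.state)
    def restore(self, snap):
        self.state = dict(snap)
    def invoke(self, name):
        self.state["bold"] = not self.state["bold"]
    def customize(self, name):
        pass  # rebinding the keystroke leaves behavior unchanged

verdict, note = run_a_3_1_5_test(MockTool())
```

Note that this sketch compares state *differences*, not the states themselves, which matches the reading of Step 6 above; it leaves open the same issues flagged below about what counts as a "specification" and how many must be tested.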

ISSUE: Assume that a "customized" command should perform exactly the same function as the command before customization (judged by equality of the "before" and "after" state differences); otherwise the command is in fact not "customized".

ISSUES: How many different specifications should be tested for this SC to pass (is one sufficient)? How does one express a specification?

Thanks and best wishes
Tim Boland NIST
Received on Friday, 16 November 2012 20:54:50 UTC
