Test Plan for ATAG2.0 (updated based on the 25 June 2004 ATAG2.0 WD; also referencing the updated WCAG2.0 WD of 2 June 2004 and ISO16071:2002(E)) - August 26, 2004

NOTE: The tool designer is assumed to be knowledgeable of the tool; the author is one who uses the tool (no prior knowledge of the tool is assumed, but the author can be the same person as the tool designer, and many authors can perform tests on a single tool). The test plan may include combinations of machine testing and human-controlled testing. The "form" referred to below includes a public listing of pertinent test data (including test number, success criteria number, individuals involved, and results).
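The test data the form must capture (test number, success criteria number, individuals involved, results) could be recorded in a simple structure such as the following sketch; the field names and row layout are assumptions for illustration, not part of the plan:

```python
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    # One row of the public test form described in the note above.
    test_number: str                 # e.g. "SC1.1-T4" (hypothetical numbering)
    success_criteria: str            # e.g. "1.1"
    individuals: list = field(default_factory=list)  # testers involved
    result: str = "N/A"              # "pass", "fail", or "N/A"

    def as_form_row(self) -> str:
        # Render the record as one line of the public listing.
        return (f"{self.test_number} | SC {self.success_criteria} | "
                f"{', '.join(self.individuals)} | {self.result}")

record = TestRecord("SC1.1-T4", "1.1", ["author A"], "pass")
print(record.as_form_row())
```

A shared structure like this keeps machine-run and human-run results in one listing, which is the point of making the form public.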

Conformance Level A

  1. Success Criteria 1.1: The authoring interface must conform to ISO16071 level 1 (ISO16071 RELATIVE PRIORITY)

    Test Plan

    1. tool designer lists capabilities of authoring tool interface on form (and other documentation necessary for author to use)
    2. author lists on form the ISO16071 Part 7 core-application capabilities to be tested
    3. author lists on form the core-application elements of ISO16071 Part 7
    4. author lists on form which core-application elements of ISO16071 Part 7 are passed by the authoring interface

  2. Success Criteria 1.2: At least one editing method must conform to ISO16071 level 1 for each element and object property editable by the tool (ISO16071 RELATIVE PRIORITY)

    Test Plan

    1. tool designer lists on form all editing methods to be considered (available) for the tool
    2. tool designer lists on form all elements editable by authoring tool
    3. tool designer lists on form all object properties editable by authoring tool
    4. author lists on form the ISO16071 Part 7 core-application testing criteria
    5. author tries each previously listed editing method, using the previously listed elements and object properties, against the ISO16071 Part 7 core-application testing criteria, and lists on form which criteria are passed, the editing method used, and the elements and object properties edited

  3. Success Criteria 1.3:
    1. All editing views must display text equivalents for any non-text content
    2. All editing views must either respect operating system display settings (for color, contrast, size, and font) or, from within the tool, provide a means of changing color, contrast, size, and font without affecting the content markup

    Test Plan

    1. Tool designer defines all editing views supported by the tool on form (as well as non-text content types supported?)
    2. Tool designer defines all possible operating system display settings for color, contrast, size and font supported by tool on form
    3. the above-mentioned editing views are each tested by author against random (all?) samples of non-text content to make sure a text equivalent is displayed for each sample; author places results on form
    4. the above-mentioned editing views are each tested by author against known settings for color, contrast, size, and font, if this choice is checked on form
    5. the authoring tool is tested by author to see if color, contrast, size, and font can each be changed (to known, verifiable values) in the above-mentioned editing views for a reference piece of content, if this choice is checked on form
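Step 5's requirement that display changes leave the content markup untouched can be verified by comparing a digest of the exported content before and after the change. A minimal sketch, assuming the tool can export its content as a string; the sample markup is invented:

```python
import hashlib

def markup_digest(markup: str) -> str:
    # Fingerprint of the content markup, independent of any display settings
    # applied inside the tool's editing view.
    return hashlib.sha256(markup.encode("utf-8")).hexdigest()

# Hypothetical reference content exported before and after the author
# changes the tool's color/contrast/size/font settings.
before = '<p>Reference content</p>'
after = '<p>Reference content</p>'   # exported again after the change

assert markup_digest(before) == markup_digest(after), \
    "display-setting change altered the content markup"
```

Equal digests show the display change was purely presentational; a mismatch would be entered on the form as a failure of this step.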

  4. Success criteria 2.1: All markup strings written automatically by the tool (i.e., not authored "by hand") must conform to the applicable markup language specification

    Test Plan

    1. on form, tool designer defines how markup strings conforming to markup language specifications are automatically generated by the tool
    2. on form, tool designer specifies which markup language specifications are supported by the tool
    3. under author control, authoring tool generates a series of markup strings automatically
    4. author verifies each generated markup string against the appropriate language specification using the defined mechanisms, and lists on form each markup string generated and the conformance verification for that string
    5. if no pre-existing mechanism defined for conformance, author will explain on form how each markup string conforms to each referenced specification mentioned before
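Steps 4 and 5 call for verifying each generated markup string against its language specification. For XML-based formats such as XHTML, well-formedness is a necessary first check; the following sketch uses Python's standard XML parser as a stand-in for a full conformance validator (the sample strings are invented):

```python
import xml.etree.ElementTree as ET

def is_well_formed(markup: str) -> bool:
    # Well-formedness is a necessary (not sufficient) condition for
    # conformance to XML-based markup specifications such as XHTML;
    # a full check would validate against the format's DTD or schema.
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

# Hypothetical strings the tool might generate automatically.
print(is_well_formed('<p><em>ok</em></p>'))   # properly nested
print(is_well_formed('<p><em>bad</p></em>'))  # mismatched nesting
```

Each string and its verification result would be entered on the form as step 4 requires.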

  5. Success Criteria 2.2:
    1. The authoring tool must support at least one WCAG-capable format for each Web content type produced
    2. When format selection is automatic, the selected format must be WCAG-capable

    Test Plan

    1. Tool designer lists on form all web content types produced by the tool
    2. Tool designer lists on form all WCAG-capable formats supported for each content type mentioned previously
    3. Tool designer specifies on form whether any format selections of the tool are automatic
    4. Author verifies on form that all of the following are true for each supported/selected format mentioned before (or N/A if not applicable?):
      1. For every referenced format, all text equivalents in that format are explicitly associated with non-text content in that format, except when the non-text content is intended to create a specific sensory experience
      2. For every referenced format, all non-text content in that format that is designed to create a specific sensory experience has a text-label or text-description explicitly associated with it
      3. For every referenced format, an audio description of all visual events is provided for all audio-visual media supporting that format.
      4. For every referenced format, captions are provided for all significant dialogue and sounds in time-dependent material supported by that format
      5. For every referenced format, all descriptions and captions supported by that format are synchronized with the events they represent.
      6. For every referenced format, if the Web content of that format is real-time video with audio, real-time captions are provided in every instance
      7. For every referenced format, if the Web content of that format is real-time non-interactive video, a substitute is provided that conforms to items 1 and 2 of this list, or a link is provided to a substitute that conforms to items 1 and 2 of this list
      8. For every referenced format, if a presentation in that format that contains only audio or video requires users to respond interactively at specific times during the presentation, then a synchronized equivalent (audio, visual or text) presentation is always provided
      9. For every referenced format, all structures and relationships in that format can be derived programmatically
      10. For every referenced format, all emphasis supported by that format can be derived programmatically
      11. For every referenced format, all information in that format presented through color is also available without color
      12. For every referenced format, all text supported by that format that is presented over a background is electronically available so that it could be re-presented in a form that allows the text to be distinguished from the background
      13. For every referenced format, all of the functionality of the content in that format, where the functionality or its outcome can be described in a sentence, is operable through a keyboard or keyboard interface.
      14. For every referenced format, all content in that format is designed so that time limits are not an essential part of interaction, or at least one of the following is always true for each time limit: the user is allowed to deactivate the time limit, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the length of the default setting, or the user is warned before time expires and allowed to extend the time limit with a simple action, or the time limit is an important part of a real-time event and no alternative to the time limit is possible, or the time limit is part of an activity where timing is essential, and time limits can not be extended further without invalidating the activity
      15. For every referenced format, all content in that format that violates General Flash Threshold or Red Flash Threshold is always marked in a way that the user can access prior to its appearance
      16. For every referenced format, the natural language corresponding to that format of each document as a whole can be identified by automated tools
      17. For every referenced format, the meaning of all abbreviations and acronyms supported by that format can be programmatically located in all instances
      18. For every referenced format, all instances of extreme changes of content in that format are always implemented in a manner that can be programmatically identified
      19. For every referenced format, except where a site has documented that a specification was violated for backward compatibility or compatibility with assistive technology, the technology supporting that format has passed validity tests for the version of the technology in use
      20. For every referenced format, structural elements and attributes are always used as defined in the specification
      21. For every referenced format, at least one plug-in required to access the content supporting that format conforms to at least the default set of conformance requirements of the User Agent Accessibility Guidelines (UAAG) 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If required plug-ins are not accessible, an alternative solution is always provided that conforms to WCAG 2.0. If inaccessible plug-ins are available, then a method for obtaining an accessible plug-in is always provided from the content of that format.
      22. For every referenced format, any programmatic user interface components of the content of that format always conform to at least the default set of conformance requirements of the UAAG 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If the custom user interfaces cannot be made accessible, an alternative solution is always provided that meets WCAG 2.0 (including this provision) to the level claimed. Requirements (a) through (j):
        1. If the application renders visual text, it should conform to the VisualText checkpoints.
        2. If the application renders images, it should conform to the Image checkpoints.
        3. If the application renders animations, it should conform to the Animation checkpoints.
        4. If the application renders video, it should conform to the Video checkpoints.
        5. If the application renders audio, it should conform to the Audio checkpoints.
        6. If the application performs its own event handling, it should conform to the Events checkpoints.
        7. If the application implements a selection mechanism, it should conform to the Selection checkpoints.
        8. The application should support keyboard access per UAAG 1.0 checkpoints 1.1 and 6.7.
        9. If the application implements voice or pointer input, it should conform to the Input Modality checkpoints.
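Item 1 of the list above (text equivalents explicitly associated with non-text content) can be spot-checked mechanically in HTML-family formats. A minimal sketch using Python's standard HTML parser; the sample markup is invented, and a real test would cover all non-text element types, not only images:

```python
from html.parser import HTMLParser

class MissingAltFinder(HTMLParser):
    # Collects <img> elements that lack an explicit alt attribute,
    # i.e. non-text content with no associated text equivalent.
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(attrs)

def images_missing_alt(markup: str):
    finder = MissingAltFinder()
    finder.feed(markup)
    return finder.missing

sample = '<img src="a.png" alt="logo"><img src="b.png">'
print(len(images_missing_alt(sample)))  # one image lacks a text equivalent
```

Each format's result (pass, fail, or N/A) would be recorded on the form as step 4 directs.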

  6. Success Criteria 2.3: Tools must always meet at least one of the following:
    1. generate accessible content automatically
    2. provide a method for authoring "by hand"
    3. provide the author with accessible options for every authoring task

    Test Plan

    1. tool designer declares the capability of their authoring tool on a form by checking one or more of three boxes corresponding to the three items in the success criteria
    2. if appropriate item is checked, tool designer explains on form how function is accomplished
    3. author states on the form whether the tool successfully generates accessible content automatically, if that item is checked on the form, and if so, at what times it does this, OR
    4. author states on the form whether the tool successfully provides a method for authoring "by hand", if that item is checked on the form, and if so, at what times it does this, OR
    5. author states on the form whether the tool provides accessible options for every authoring task, if that item is checked on the form, and if so, at what times it does this
    6. for each of the above, author provides a description of the authoring task and the resulting content

    ------------------------

  7. Success Criteria 2.5: Unless the author explicitly instructs the authoring tool otherwise, all content generated by the tool must satisfy all of the WCAG2.0 06/02/04 WD Level 1 success criteria (WCAG RELATIVE PRIORITY):

    Test Plan

    1. Tool designer enters on form how content is generated by the authoring tool.
    2. Using this info, without instructing the authoring tool otherwise, author (tester) tests all of the following (or enters N/A as appropriate) and enters results for each item on form:
      1. For all content generated by the tool, all text equivalents are explicitly associated with non-text content, except when the non-text content is intended to create a specific sensory experience
      2. For all content generated by the tool, all non-text content that is designed to create a specific sensory experience has a text-label or text-description explicitly associated with it
      3. For all content generated by the tool, an audio description of all visual events is provided for all audio-visual media.
      4. For all content generated by the tool, captions are provided for all significant dialogue and sounds in time-dependent material
      5. For all content generated by the tool, all descriptions and captions are synchronized with the events they represent.
      6. For all content generated by the tool, if the Web content is real-time video with audio, real-time captions are provided in every instance
      7. For all content generated by the tool, if the Web content is real-time non-interactive video, a substitute is provided that conforms to items 1 and 2 of this list, or a link is provided to a substitute that conforms to items 1 and 2 of this list
      8. For all content generated by the tool, if a presentation that contains only audio or video requires users to respond interactively at specific times during the presentation, then a synchronized equivalent (audio, visual or text) presentation is always provided
      9. For all content generated by the tool, all structures and relationships can be derived programmatically
      10. For all content generated by the tool, all emphasis can be derived programmatically
      11. For all content generated by the tool, all information presented through color is also available without color
      12. For all content generated by the tool, all text that is presented over a background is electronically available so that it could be re-presented in a form that allows the text to be distinguished from the background
      13. For all content generated by the tool, all of the functionality of the content, where the functionality or its outcome can be described in a sentence, is operable through a keyboard or keyboard interface.
      14. For all content generated by the tool, all content is designed so that time limits are not an essential part of interaction, or at least one of the following is always true for each time limit: the user is allowed to deactivate the time limit, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the length of the default setting, or the user is warned before time expires and allowed to extend the time limit with a simple action, or the time limit is an important part of a real-time event and no alternative to the time limit is possible, or the time limit is part of an activity where timing is essential, and time limits can not be extended further without invalidating the activity
      15. For all content generated by the tool, all content that violates General Flash Threshold or Red Flash Threshold is always marked in a way that the user can access prior to its appearance
      16. For all content generated by the tool, the natural language of each document as a whole can be identified by automated tools
      17. For all content generated by the tool, the meaning of all abbreviations and acronyms can be programmatically located in all instances
      18. For all content generated by the tool, all instances of extreme changes of content are always implemented in a manner that can be programmatically identified
      19. For all content generated by the tool, except where a site has documented that a specification was violated for backward compatibility or compatibility with assistive technology, the technology has passed validity tests for the version of the technology in use
      20. For all content generated by the tool, all structural elements and attributes are always used as defined in the specification
      21. For all content generated by the tool, at least one plug-in required to access the content conforms to at least the default set of conformance requirements of the User Agent Accessibility Guidelines (UAAG) 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If required plug-ins are not accessible, an alternative solution is always provided that conforms to WCAG 2.0. If inaccessible plug-ins are available, then a method for obtaining an accessible plug-in is always provided from the content.
      22. For all content generated by the tool, any programmatic user interface components of the content always conform to at least the default set of conformance requirements of the UAAG 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If the custom user interfaces cannot be made accessible, an alternative solution is always provided that meets WCAG 2.0 (including this provision) to the level claimed. Requirements (a) through (j):
        1. If the application renders visual text, it should conform to the VisualText checkpoints.
        2. If the application renders images, it should conform to the Image checkpoints.
        3. If the application renders animations, it should conform to the Animation checkpoints.
        4. If the application renders video, it should conform to the Video checkpoints.
        5. If the application renders audio, it should conform to the Audio checkpoints.
        6. If the application performs its own event handling, it should conform to the Events checkpoints.
        7. If the application implements a selection mechanism, it should conform to the Selection checkpoints.
        8. The application should support keyboard access per UAAG 1.0 checkpoints 1.1 and 6.7.
        9. If the application implements voice or pointer input, it should conform to the Input Modality checkpoints.
    3. Author explicitly instructs the authoring tool not to produce WCAG Level 1-conformant content, and determines if the tool complies?
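Several of the checks above lend themselves to simple automation. For example, item 16 (the natural language of each document can be identified by automated tools) can be spot-checked in HTML-family output by looking for a language declaration on the root element. A minimal sketch under that assumption; a real check would also consider xml:lang and HTTP headers:

```python
import re

def declared_language(markup: str):
    # Returns the declared natural language of the document if the root
    # <html> element carries a lang attribute; otherwise None.
    match = re.search(r'<html\b[^>]*\blang\s*=\s*["\']([^"\']+)["\']',
                      markup, re.IGNORECASE)
    return match.group(1) if match else None

print(declared_language('<html lang="en"><body>hi</body></html>'))
print(declared_language('<html><body>hi</body></html>'))  # no declaration
```

A None result for generated content would be entered on the form as a failure of item 16.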

  8. Success Criteria 2.6: Any web content (e.g., templates, clip art, multimedia objects, scripts, applets, example pages, etc.) that is bundled or preferentially licensed (i.e., better terms for users of the authoring tool than for the general public) must satisfy the Level 1 WCAG2.0 (06/02/04 draft) success criteria (WCAG RELATIVE PRIORITY):

    Test Plan

    1. Tool designer defines all content bundled or preferentially licensed (referred to hereafter as "web content") for the tool (and what preferential licensing means) on a form
    2. Using this info, author (tester) tests all of the following and enters result for each (or N/A as appropriate) on a form:
      1. For all web content generated by the tool, all text equivalents are explicitly associated with non-text content, except when the non-text content is intended to create a specific sensory experience
      2. For all web content generated by the tool, all non-text content that is designed to create a specific sensory experience has a text-label or text-description explicitly associated with it
      3. For all web content generated by the tool, an audio description of all visual events is provided for all audio-visual media.
      4. For all web content generated by the tool, captions are provided for all significant dialogue and sounds in time-dependent material
      5. For all web content generated by the tool, all descriptions and captions are synchronized with the events they represent.
      6. For all web content generated by the tool, if the Web content is real-time video with audio, real-time captions are provided in every instance
      7. For all web content generated by the tool, if the Web content is real-time non-interactive video, a substitute is provided that conforms to items 1 and 2 of this list, or a link is provided to a substitute that conforms to items 1 and 2 of this list
      8. For all web content generated by the tool, if a presentation that contains only audio or video requires users to respond interactively at specific times during the presentation, then a synchronized equivalent (audio, visual or text) presentation is always provided
      9. For all web content generated by the tool, all structures and relationships can be derived programmatically
      10. For all web content generated by the tool, all emphasis can be derived programmatically
      11. For all web content generated by the tool, all information presented through color is also available without color
      12. For all web content generated by the tool, all text that is presented over a background is electronically available so that it could be re-presented in a form that allows the text to be distinguished from the background
      13. For all web content generated by the tool, all of the functionality of the content, where the functionality or its outcome can be described in a sentence, is operable through a keyboard or keyboard interface.
      14. For all web content generated by the tool, all content is designed so that time limits are not an essential part of interaction, or at least one of the following is always true for each time limit: the user is allowed to deactivate the time limit, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the length of the default setting, or the user is warned before time expires and allowed to extend the time limit with a simple action, or the time limit is an important part of a real-time event and no alternative to the time limit is possible, or the time limit is part of an activity where timing is essential, and time limits can not be extended further without invalidating the activity
      15. For all web content generated by the tool, all content that violates General Flash Threshold or Red Flash Threshold is always marked in a way that the user can access prior to its appearance
      16. For all web content generated by the tool, the natural language of each document as a whole can be identified by automated tools
      17. For all web content generated by the tool, the meaning of all abbreviations and acronyms can be programmatically located in all instances
      18. For all web content generated by the tool, all instances of extreme changes of content are always implemented in a manner that can be programmatically identified
      19. For all web content generated by the tool, except where a site has documented that a specification was violated for backward compatibility or compatibility with assistive technology, the technology has passed validity tests for the version of the technology in use
      20. For all web content generated by the tool, all structural elements and attributes are always used as defined in the specification
      21. For all web content generated by the tool, at least one plug-in required to access the content conforms to at least the default set of conformance requirements of the User Agent Accessibility Guidelines (UAAG) 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If required plug-ins are not accessible, an alternative solution is always provided that conforms to WCAG 2.0. If inaccessible plug-ins are available, then a method for obtaining an accessible plug-in is always provided from the content.
      22. For all web content generated by the tool, any programmatic user interface components of the content always conform to at least the default set of conformance requirements of the UAAG 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If the custom user interfaces cannot be made accessible, an alternative solution is always provided that meets WCAG 2.0 (including this provision) to the level claimed. Requirements (a) through (j):
        1. If the application renders visual text, it should conform to the VisualText checkpoints.
        2. If the application renders images, it should conform to the Image checkpoints.
        3. If the application renders animations, it should conform to the Animation checkpoints.
        4. If the application renders video, it should conform to the Video checkpoints.
        5. If the application renders audio, it should conform to the Audio checkpoints.
        6. If the application performs its own event handling, it should conform to the Events checkpoints.
        7. If the application implements a selection mechanism, it should conform to the Selection checkpoints.
        8. The application should support keyboard access per UAAG 1.0 checkpoints 1.1 and 6.7.
        9. If the application implements voice or pointer input, it should conform to the Input Modality checkpoints.

  9. Success Criteria 3.1:
    1. When the actions of the author risk creating accessibility problems (not satisfying any of the WCAG2.0 (06/02/04) Level 1 success criteria), the tool must introduce the appropriate accessible authoring practice.
    2. The intervention must occur at least once before completion of authoring (e.g., final save, publishing, etc.)
    (WCAG RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes all intervention features of tool and accessibility issues prompting such intervention on a form
    2. tool designer describes on form how an author could configure the schedule of intervention
    3. Using this info, author tests all of the following and enters results on a form (or enters N/A as appropriate for each item) (NOTE: x is a small number that is assumed always to occur before completion of authoring):
      1. if all text equivalents are not explicitly associated with non-text content, except when the non-text content is intended to create a specific sensory experience, the author is notified within x seconds of the event
      2. if all non-text content that is designed to create a specific sensory experience does not have a text-label or text-description explicitly associated with it, the author is notified within x seconds of the event
      3. if an audio description of all visual events is not provided for all audio-visual media, the author is notified within x seconds of the event
      4. if captions are not provided for all significant dialogue and sounds in time-dependent material, the author is notified within x seconds of the event
      5. if all descriptions and captions are not synchronized with the events they represent, the author is notified within x seconds of the event
      6. if the Web content is real-time video with audio, and real-time captions are not provided in every instance, the author is notified within x seconds of the event
      7. if the Web content is real-time non-interactive video, and a substitute is not provided that conforms to items 1 and 2 of this list, or a link is not provided to a substitute that conforms to items 1 and 2 of this list, the author is notified within x seconds of the event
      8. if a presentation that contains only audio or video requires users to respond interactively at specific times during the presentation, and a synchronized equivalent (audio, visual or text) presentation is not always provided, the author is notified within x seconds of the event
      9. if all structures and relationships cannot be derived programmatically, the author is notified within x seconds of the event
      10. if all emphasis cannot be derived programmatically, the author is notified within x seconds of the event
      11. if all information presented through color is not also available without color, the author is notified within x seconds of the event
      12. if all text that is presented over a background is not electronically available so that it could be re-presented in a form that allows the text to be distinguished from the background, the author is notified within x seconds of the event
      13. if all of the functionality of the content, where the functionality or its outcome can be described in a sentence, is not operable through a keyboard or keyboard interface, the author is notified within x seconds of the event
      14. if all content is not designed so that time limits are not an essential part of interaction, or at least one of the following is not always true for each time limit: the user is allowed to deactivate the time limit, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the length of the default setting, or the user is warned before time expires and allowed to extend the time limit with a simple action, or the time limit is an important part of a real-time event and no alternative to the time limit is possible, or the time limit is part of an activity where timing is essential, and time limits can not be extended further without invalidating the activity, then the author is notified within x seconds of the event
      15. if all content that violates General Flash Threshold or Red Flash Threshold is not always marked in a way that the user can access prior to its appearance, the author is notified within x seconds of the event
      16. if the natural language of each document as a whole cannot be identified by automated tools, the author is notified within x seconds of the event
      17. if the meaning of all abbreviations and acronyms cannot be programmatically located in all instances, the author is notified within x seconds of the event
      18. if all instances of extreme changes of content are not always implemented in a manner that can be programmatically identified, the author is notified within x seconds of the event
      19. if, except where a site has documented that a specification was violated for backward compatibility or compatibility with assistive technology, the technology has not passed validity tests for the version of the technology in use, the author is notified within x seconds of the event
      20. if all structural elements and attributes are not always used as defined in the specification, the author is notified within x seconds of the event
      21. if no plug-in required to access the content conforms to at least the default set of conformance requirements of the User Agent Accessibility Guidelines (UAAG) 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply, or if required plug-ins are not accessible and an alternative solution conforming to WCAG 2.0 is not always provided, or if inaccessible plug-ins are available and a method for obtaining an accessible plug-in is not always provided from the content, the author is notified within x seconds of the event
      22. if any programmatic user interface components of the content do not always conform to at least the default set of conformance requirements of the UAAG 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply, or if the custom user interfaces cannot be made accessible and an alternative solution meeting WCAG 2.0 (including this provision) to the level claimed is not always provided, the author is notified within x seconds of the event. Requirements (a) through (j):
        1. If the application renders visual text, it should conform to the VisualText checkpoints.
        2. If the application renders images, it should conform to the Image checkpoints.
        3. If the application renders animations, it should conform to the Animation checkpoints.
        4. If the application renders video, it should conform to the Video checkpoints.
        5. If the application renders audio, it should conform to the Audio checkpoints.
        6. If the application performs its own event handling, it should conform to the Events checkpoints.
        7. If the application implements a selection mechanism, it should conform to the Selection checkpoints.
        8. The application should support keyboard access per UAAG 1.0 checkpoints 1.1 and 6.7.
        9. If the application implements voice or pointer input, it should conform to the Input Modality checkpoints.
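
A check driving notifications like those above (e.g. a missing text equivalent for non-text content) could be sketched as follows. This is a minimal illustration assuming HTML content; the `notify` callable is a hypothetical stand-in for the tool's notification mechanism, and the "within x seconds" timing is left to the tool.

```python
from html.parser import HTMLParser

class MissingAltScanner(HTMLParser):
    """Collect positions of <img> tags that lack an explicit alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column) of the event

def scan_and_notify(html, notify):
    """Run the check and pass each violation to the tool's notification hook."""
    scanner = MissingAltScanner()
    scanner.feed(html)
    for line, col in scanner.violations:
        notify(f"missing text equivalent at line {line}, column {col}")
    return scanner.violations
```

A real tool would run such a scanner on each authoring event and route the messages through whatever alerting channel the interface provides.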

  10. Success Criteria 3.2: The tool must provide a check (automated check, semi-automated check or manual check) for detecting violations of each Level 1 success criterion of WCAG2.0 (06/02/04 draft)(WCAG RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes on form all checks supported by the tool for violation detection, what kind of check each is, and how authors would recognize or use info provided by each check
    2. given this info, author tests each following item and enters results on a form (or N/A for items as appropriate):
      1. if all text equivalents are not explicitly associated with non-text content, except when the non-text content is intended to create a specific sensory experience, the tool always provides a check for detection
      2. if all non-text content that is designed to create a specific sensory experience does not have a text-label or text-description explicitly associated with it, the tool always provides a check for detection
      3. if an audio description of all visual events is not provided for all audio-visual media, the tool always provides a check for detection
      4. if captions are not provided for all significant dialogue and sounds in time-dependent material, the tool always provides a check for detection
      5. if all descriptions and captions are not synchronized with the events they represent, the tool always provides a check for detection
      6. if the Web content is real-time video with audio, and real-time captions are not provided in every instance, the tool always provides a check for detection
      7. if the Web content is real-time non-interactive video, and a substitute is not provided that conforms to items 1 and 2 of this list, or a link is not provided to a substitute that conforms to items 1 and 2 of this list, the tool always provides a check for detection
      8. if a presentation that contains only audio or video requires users to respond interactively at specific times during the presentation, and a synchronized equivalent (audio, visual or text) presentation is not always provided, the tool always provides a check for detection
      9. if all structures and relationships cannot be derived programmatically, the tool always provides a check for detection
      10. if all emphasis cannot be derived programmatically, the tool always provides a check for detection
      11. if all information presented through color is not also available without color, the tool always provides a check for detection
      12. if all text that is presented over a background is not electronically available so that it could be re-presented in a form that allows the text to be distinguished from the background, the tool always provides a check for detection
      13. if all of the functionality of the content, where the functionality or its outcome can be described in a sentence, is not operable through a keyboard or keyboard interface, the tool always provides a check for detection
      14. if all content is not designed so that time limits are not an essential part of interaction, or at least one of the following is not always true for each time limit: the user is allowed to deactivate the time limit, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the length of the default setting, or the user is warned before time expires and allowed to extend the time limit with a simple action, or the time limit is an important part of a real-time event and no alternative to the time limit is possible, or the time limit is part of an activity where timing is essential, and time limits can not be extended further without invalidating the activity, then the tool always provides a check for detection
      15. if all content that violates General Flash Threshold or Red Flash Threshold is not always marked in a way that the user can access prior to its appearance, the tool always provides a check for detection
      16. if the natural language of each document as a whole cannot be identified by automated tools, the tool always provides a check for detection
      17. if the meaning of all abbreviations and acronyms cannot be programmatically located in all instances, the tool always provides a check for detection
      18. if all instances of extreme changes of content are not always implemented in a manner that can be programmatically identified, the tool always provides a check for detection
      19. if, except where a site has documented that a specification was violated for backward compatibility or compatibility with assistive technology, the technology has not passed validity tests for the version of the technology in use, the tool always provides a check for detection
      20. if all structural elements and attributes are not always used as defined in the specification, the tool always provides a check for detection
      21. if no plug-in required to access the content conforms to at least the default set of conformance requirements of the User Agent Accessibility Guidelines (UAAG) 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If required plug-ins are not accessible, an alternative solution is not always provided that conforms to WCAG 2.0. If inaccessible plug-ins are available, then a method for obtaining an accessible plug-in is not always provided from the content. If any of this is true, the tool always provides a check for detection
      22. if any programmatic user interface components of the content do not always conform to at least the default set of conformance requirements of the UAAG 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If the custom user interfaces cannot be made accessible, an alternative solution is not always provided that meets WCAG 2.0 (including this provision) to the level claimed. If any of this is true, the tool always provides a check for detection.
        Requirements (a) through (j):
        1. If the application renders visual text, it should conform to the VisualText checkpoints.
        2. If the application renders images, it should conform to the Image checkpoints.
        3. If the application renders animations, it should conform to the Animation checkpoints.
        4. If the application renders video, it should conform to the Video checkpoints.
        5. If the application renders audio, it should conform to the Audio checkpoints.
        6. If the application performs its own event handling, it should conform to the Events checkpoints.
        7. If the application implements a selection mechanism, it should conform to the Selection checkpoints.
        8. The application should support keyboard access per UAAG 1.0 checkpoints 1.1 and 6.7.
        9. If the application implements voice or pointer input, it should conform to the Input Modality checkpoints.
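
As one concrete example from the list above, the check for item 16 (the natural language of the document can be identified by automated tools) reduces, for HTML, to looking for a language attribute on the root element. A minimal sketch, assuming HTML content:

```python
from html.parser import HTMLParser

class LangCheck(HTMLParser):
    """Record the lang (or xml:lang) attribute of the root html element."""
    def __init__(self):
        super().__init__()
        self.lang = None

    def handle_starttag(self, tag, attrs):
        if tag == "html":
            a = dict(attrs)
            self.lang = a.get("lang") or a.get("xml:lang")

def document_language(html):
    """Return the declared natural language, or None (a violation to report)."""
    checker = LangCheck()
    checker.feed(html)
    return checker.lang
```

A `None` result is what the tool's check would surface to the author as a detected violation.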

  11. Success Criteria 3.3: The tool must provide a repair (automated repair, semi-automated repair or manual repair) for correcting violations of each Level 1 success criterion of WCAG2.0 (06/02/04 draft)(WCAG RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes on form all repairs provided for violation correction by the tool, and what kind of repair it is
    2. given this info, author tests each item below to ensure repair successfully made (or enters N/A for item as appropriate), and enters results on form:
      1. if all text equivalents are not explicitly associated with non-text content, except when the non-text content is intended to create a specific sensory experience, the tool always provides a repair for correction
      2. if all non-text content that is designed to create a specific sensory experience does not have a text-label or text-description explicitly associated with it, the tool always provides a repair for correction
      3. if an audio description of all visual events is not provided for all audio-visual media, the tool always provides a repair for correction
      4. if captions are not provided for all significant dialogue and sounds in time-dependent material, the tool always provides a repair for correction
      5. if all descriptions and captions are not synchronized with the events they represent, the tool always provides a repair for correction
      6. if the Web content is real-time video with audio, and real-time captions are not provided in every instance, the tool always provides a repair for correction
      7. if the Web content is real-time non-interactive video, and a substitute is not provided that conforms to items 1 and 2 of this list, or a link is not provided to a substitute that conforms to items 1 and 2 of this list, the tool always provides a repair for correction
      8. if a presentation that contains only audio or video requires users to respond interactively at specific times during the presentation, and a synchronized equivalent (audio, visual or text) presentation is not always provided, the tool always provides a repair for correction
      9. if all structures and relationships cannot be derived programmatically, the tool always provides a repair for correction
      10. if all emphasis cannot be derived programmatically, the tool always provides a repair for correction
      11. if all information presented through color is not also available without color, the tool always provides a repair for correction
      12. if all text that is presented over a background is not electronically available so that it could be re-presented in a form that allows the text to be distinguished from the background, the tool always provides a repair for correction
      13. if all of the functionality of the content, where the functionality or its outcome can be described in a sentence, is not operable through a keyboard or keyboard interface, the tool always provides a repair for correction
      14. if all content is not designed so that time limits are not an essential part of interaction, or at least one of the following is not always true for each time limit: the user is allowed to deactivate the time limit, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the length of the default setting, or the user is warned before time expires and allowed to extend the time limit with a simple action, or the time limit is an important part of a real-time event and no alternative to the time limit is possible, or the time limit is part of an activity where timing is essential, and time limits can not be extended further without invalidating the activity, then the tool always provides a repair for correction
      15. if all content that violates General Flash Threshold or Red Flash Threshold is not always marked in a way that the user can access prior to its appearance, the tool always provides a repair for correction
      16. if the natural language of each document as a whole cannot be identified by automated tools, the tool always provides a repair for correction
      17. if the meaning of all abbreviations and acronyms cannot be programmatically located in all instances, the tool always provides a repair for correction
      18. if all instances of extreme changes of content are not always implemented in a manner that can be programmatically identified, the tool always provides a repair for correction
      19. if, except where a site has documented that a specification was violated for backward compatibility or compatibility with assistive technology, the technology has not passed validity tests for the version of the technology in use, the tool always provides a repair for correction
      20. if all structural elements and attributes are not always used as defined in the specification, the tool always provides a repair for correction
      21. if no plug-in required to access the content conforms to at least the default set of conformance requirements of the User Agent Accessibility Guidelines (UAAG) 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If required plug-ins are not accessible, an alternative solution is not always provided that conforms to WCAG 2.0. If inaccessible plug-ins are available, then a method for obtaining an accessible plug-in is not always provided from the content. If any of this is true, the tool always provides a repair for correction
      22. if any programmatic user interface components of the content do not always conform to at least the default set of conformance requirements of the UAAG 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If the custom user interfaces cannot be made accessible, an alternative solution is not always provided that meets WCAG 2.0 (including this provision) to the level claimed. If any of this is true, the tool always provides a repair for correction.
        Requirements (a) through (j):
        1. If the application renders visual text, it should conform to the VisualText checkpoints.
        2. If the application renders images, it should conform to the Image checkpoints.
        3. If the application renders animations, it should conform to the Animation checkpoints.
        4. If the application renders video, it should conform to the Video checkpoints.
        5. If the application renders audio, it should conform to the Audio checkpoints.
        6. If the application performs its own event handling, it should conform to the Events checkpoints.
        7. If the application implements a selection mechanism, it should conform to the Selection checkpoints.
        8. The application should support keyboard access per UAAG 1.0 checkpoints 1.1 and 6.7.
        9. If the application implements voice or pointer input, it should conform to the Input Modality checkpoints.
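
A semi-automated repair such as item 1 above (missing text equivalents) might be sketched as follows. The `prompt` callable is a hypothetical stand-in for however the tool asks the author to supply the equivalent; the regex handling is deliberately naive.

```python
import re

IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

def repair_missing_alt(html, prompt):
    """For each <img> lacking an alt attribute, ask the author for a text
    equivalent via `prompt` and splice it into the tag."""
    def fix(match):
        tag = match.group(0)
        if re.search(r"\balt\s*=", tag, re.IGNORECASE):
            return tag  # already has an equivalent; leave untouched
        text = prompt(tag)  # author supplies the text equivalent
        body = tag[:-2] if tag.endswith("/>") else tag[:-1]
        close = "/>" if tag.endswith("/>") else ">"
        return f'{body} alt="{text}"{close}'
    return IMG_TAG.sub(fix, html)
```

Note the repair never fabricates an equivalent itself; it always routes through the author, matching the semi-automated category.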

--------------------------------------------------------------------------------------------------

Conformance Level AA (after doing Conformance Level A)

  1. Success Criteria 1.1: The authoring interface must conform to ISO16071 Level 2 (ISO16071 RELATIVE PRIORITY)

    Test Plan

    1. tool designer lists capabilities of authoring tool interface on form (and other documentation necessary for author to use)
    2. author lists ISO16071 part 7 primary-application capabilities for testing on form
    3. author lists primary-application elements of ISO16071 Part 7 on form
    4. author lists which primary-application elements of ISO16071 Part 7 are passed by the authoring interface on form

  2. Success Criteria 1.2: At least one editing method must conform to ISO16071 Level 2 for each element and object property editable by the tool (ISO16071 RELATIVE PRIORITY)

    Test Plan

    1. tool designer lists on form all editing methods to be considered (available) for the tool
    2. tool designer lists on form all elements editable by authoring tool
    3. tool designer lists on form all object properties editable by authoring tool
    4. author lists on form the ISO16071 primary-application Part 7 testing criteria
    5. author tries a previously-listed editing method involving before-referenced elements and object properties against ISO16071 primary-application Part 7 testing criteria and lists on form which ISO16071 primary-application Part 7 criteria are passed as well as the editing method used and the elements and object properties edited

  3. Success Criteria 1.4:
    1. In any element hierarchy, the author must be able, with a device-independent action, to move editing focus from any structural element to any element immediately above, immediately below, or at the same level in the hierarchy
    2. In any element hierarchy, the author must be able, with a device-independent action, to select, copy, cut and paste any element, and its content

    Test Plan

    1. Tool designer defines on form how editing focus is moved (with a device-independent action) for the tool (and document structures supported?)
    2. For every such element hierarchy and editing focus, the tester (author) verifies that focus is correctly moved (with device-independent action) to element immediately above in hierarchy and enters results on form
    3. For every such element hierarchy and editing focus, the tester (author) verifies that focus is correctly moved (with device-independent action) to element immediately below in hierarchy and enters results on form
    4. For every such element hierarchy and editing focus, the tester (author) verifies that focus is correctly moved (with device-independent action) to element immediately to left or right in hierarchy and enters results on form
    5. Tool designer defines element hierarchies supported on form?
    6. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be selected (with device-independent action) by the tool, and enters results on form.
    7. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be copied (with device-independent action) by the tool, and enters results on form.
    8. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be cut (with device-independent action) by the tool, and enters results on form.
    9. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be pasted (with device-independent action) by the tool, and enters results on form.
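
The focus moves in steps 2-4 presuppose an element hierarchy the tool can walk. A minimal sketch of the four moves over such a hierarchy (the `Element` class is hypothetical, not any particular tool's API; the device-independent action binding is left to the tool):

```python
class Element:
    """Minimal element-tree node for exercising the SC 1.4 focus moves."""
    def __init__(self, name, children=()):
        self.name = name
        self.parent = None
        self.children = list(children)
        for child in self.children:
            child.parent = self

def move_focus(el, direction):
    """Return the element focus lands on, or None if the move is impossible.
    'up' = parent, 'down' = first child, 'left'/'right' = adjacent siblings."""
    if direction == "up":
        return el.parent
    if direction == "down":
        return el.children[0] if el.children else None
    siblings = el.parent.children if el.parent else [el]
    i = siblings.index(el)
    if direction == "left":
        return siblings[i - 1] if i > 0 else None
    if direction == "right":
        return siblings[i + 1] if i + 1 < len(siblings) else None
```

A test harness for steps 2-4 would walk every node of each declared hierarchy and assert each move lands where expected (or is correctly reported as impossible).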

  4. Success Criteria 1.5:
    1. The authoring tool must have a search function for all editing views
    2. The author must be able to search for text within all text equivalents of any rendered non-text content
    3. The author must be able to specify whether to search content, markup, or both

    Test Plan

    1. The tool designer defines all editing views supported by the tool, and all search functions supported by the tool, on a form
    2. Given this information, the tester (author) verifies that at least one search function exists for a sample of editing views as defined above, and enters results on form
    3. Given this information, the tester (author) verifies that all samples of text within every text equivalent are searched for successfully, and enters results on form
    4. Given this information, the tester (author) verifies that the author can successfully restrict a search to content only, to markup only, or search both, and enters results on a form.
    5. Given this information, the tester (author) verifies that all searching mentioned previously is performed successfully, and enters results on a form.
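
The content-vs-markup distinction tested in step 4 can be sketched minimally. This assumes HTML-like markup and uses a naive regex split between tags and text; a real search function would operate on the tool's parsed document model.

```python
import re

TAG = re.compile(r"<[^>]*>")

def search(document, query, scope="both"):
    """SC 1.5 sketch: scope is 'content' (text outside tags),
    'markup' (text inside tags), or 'both'."""
    if scope == "content":
        haystack = TAG.sub("", document)        # strip tags, keep text
    elif scope == "markup":
        haystack = "".join(TAG.findall(document))  # keep only the tags
    else:
        haystack = document
    return query in haystack
```

The tester would exercise each scope against samples that occur only in content, only in markup, and in both.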

  5. Success Criteria 2.4:
    1. During all transformations and conversions, all unrecognized markup and accessibility information must be preserved, unless prevented by limitations of the target format
    2. When unrecognized markup or accessibility information cannot be preserved during a conversion or transformation, the author must be notified before any change is made.

    Test Plan

    1. tool designer defines all transformations and conversions possible using their tool on a form
    2. tool designer defines all accessibility information that can be provided by their tool on a form
    3. tool designer defines example of unrecognized markup that can be generated by their tool on a form
    4. tool designer lists any limitations or restrictions imposed by the target format on a form
    5. author tries some sample transformations, and defines any accessibility information and encountered unrecognized markup pertinent to that transformation on a form; if no accessibility information or unrecognized markup exists, it should be so stated
    6. author compares the defined accessibility information and encountered unrecognized markup after the transformation to that before it, making sure the same information exists, and presents the results on a form, for each transformation
    7. if the previous tests gave negative results, author states on the form whether prior notification was given (prior meaning before the transformation was attempted)
    8. author verifies that if prior notification was made, author was given choice to abort transformation, and if author chose to abort, the transformation was in fact not attempted; this info is presented on a form
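
Step 6's before/after comparison could be sketched as a set difference over whatever the tool regards as accessibility information. The attribute list here is illustrative, not normative, and the regex extraction is a deliberately naive stand-in for the tool's real document model.

```python
import re

# Illustrative (not normative) set of attributes treated as accessibility info
ACCESS_ATTRS = ("alt", "title", "longdesc", "lang", "summary")

def accessibility_info(html):
    """Collect (attribute, value) pairs regarded as accessibility information."""
    pairs = set()
    for attr in ACCESS_ATTRS:
        for value in re.findall(rf'\b{attr}\s*=\s*"([^"]*)"', html):
            pairs.add((attr, value))
    return pairs

def lost_in_transformation(before, after):
    """Accessibility information present before the transformation but missing
    afterwards; a non-empty result should trigger prior author notification."""
    return accessibility_info(before) - accessibility_info(after)
```

Per SC 2.4, a non-empty loss set is acceptable only when the target format cannot represent the information, and even then the author must be notified before the change is made.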

  6. Success Criteria 3.4:
    1. When the author inserts an unrecognized non-text object, the tool must not insert an automatically generated text equivalent (e.g. label generated from the file name)
    2. When the author inserts a non-text object for which the tool has a previously authored equivalent (i.e. created by the author, tool designer, pre-authored content developer, etc.), but the function of the object is not known with certainty, the tool must prompt the author to confirm insertion of the equivalent. However, where the function of the non-text object is known with certainty (e.g. "home button" on a navigation bar, etc.), the tool may automatically insert the equivalent.

    Test Plan

    1. tool designer defines capability of tool re: handling of unrecognized non-text objects on a form
    2. While editing with the tool, author inserts an unrecognized non-text object
    3. Author verifies for every such insertion that tool does not insert text equivalent and presents this info on form
    4. tool designer defines all known non-text objects for which text equivalents exist, and gives on a form each object, its text equivalent (or a link to it), and the function of the object
    5. author inserts a non-text object from the list mentioned before whose function is known with certainty, verifies that tool automatically inserts the correct text equivalent, and provides this info on a form
    6. author inserts a non-text object from the list mentioned before whose function is not known with certainty, and verifies that tool does not insert the equivalent automatically, but prompts the author to confirm before the equivalent is inserted; this info is provided on a form
    7. author verifies that for such prompting, if author accepts, tool does in fact insert text equivalent, and if author declines, the tool does not insert text equivalent; this info is provided on a form
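
The insertion rules exercised by the steps above can be sketched as a small decision function. The registry entries and `prompt` callable are hypothetical stand-ins for the tool's own data and confirmation UI.

```python
# Hypothetical registry: object name -> (text equivalent, function known with certainty?)
KNOWN_EQUIVALENTS = {
    "home-button": ("Home", True),
    "decorative-swirl": ("decorative swirl", False),
}

def equivalent_to_insert(name, prompt):
    """Return the text equivalent to insert (or None), per SC 3.4:
    - unrecognized object: never auto-generate an equivalent (e.g. from file name)
    - known equivalent, function certain: may auto-insert
    - known equivalent, function uncertain: insert only if the author confirms"""
    if name not in KNOWN_EQUIVALENTS:
        return None
    text, certain = KNOWN_EQUIVALENTS[name]
    if certain:
        return text
    return text if prompt(name, text) else None
```

The key negative case is the first: for an unrecognized object the function must return nothing rather than fabricate a label.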

  7. Success Criteria 3.7: All features that play a role in creating accessible content must be documented in the help mechanism.

    Test Plan

    1. tool designer lists on form all features assisting accessibility of content generated by the tool, and where those features are in the help mechanism
    2. tool designer lists on form nature of help mechanism (documentation) for the tool, and how to use help mechanism.
    3. author verifies on form that every feature listed is found in the help mechanism at the stated location and is included in the documentation.
    4. Author verifies on form for each feature a level of understanding of the feature's capabilities
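
Step 3's coverage check could be partially automated by a naive substring scan of the help text. Real help mechanisms are structured, so this is only a sketch; the human verification of location and understanding in steps 3-4 remains necessary.

```python
def undocumented_features(features, help_text):
    """Return the accessibility features whose names never appear in the
    help mechanism's text; a non-empty result fails SC 3.7."""
    lowered = help_text.lower()
    return [f for f in features if f.lower() not in lowered]
```

The feature list would come from the tool designer's form in step 1.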

  8. Success Criteria 3.8:
    1. All examples of markup code and views of the authoring interface (dialog screenshots, etc.) must satisfy all Level 1 and Level 2 success criteria of WCAG2.0 (06/02/04 draft) listed below, regardless of whether the examples are intended to demonstrate accessibility authoring practices
    2. All descriptions of authoring processes must integrate the steps needed to create accessible content

    Test Plan

    1. The tool designer describes authoring processes of the tool on a form
    2. Given this info, the author (tester) verifies that all such descriptions always integrate all steps necessary to create accessible content
    3. The tool designer defines all types of markup code and all views of the authoring interface supported by the tool on a form
    4. Given this information, author (tester) successfully verifies each following item (or N/A) and enters results on a form:
      1. For all markup code and all authoring interface views generated by the tool, all text equivalents are explicitly associated with non-text content, except when the non-text content is intended to create a specific sensory experience
      2. For all markup code and all authoring interface views generated by the tool, all non-text content that is designed to create a specific sensory experience has a text-label or text-description explicitly associated with it
      3. For all markup code and all authoring interface views generated by the tool, an audio description of all visual events is provided for all audio-visual media.
      4. For all markup code and all authoring interface views generated by the tool, captions are provided for all significant dialogue and sounds in time-dependent material
      5. For all markup code and all authoring interface views generated by the tool, all descriptions and captions are synchronized with the events they represent.
      6. For all markup code and all authoring interface views generated by the tool, if the Web content is real-time video with audio, real-time captions are provided in every instance
      7. For all markup code and all authoring interface views generated by the tool, if the Web content is real-time non-interactive video, a substitute is provided that conforms to items 1 and 2 of this list, or a link is provided to a substitute that conforms to items 1 and 2 of this list
      8. For all markup code and all authoring interface views generated by the tool, if a presentation that contains only audio or video requires users to respond interactively at specific times during the presentation, then a synchronized equivalent (audio, visual or text) presentation is always provided
      9. For all markup code and all authoring interface views generated by the tool, all structures and relationships can be derived programmatically
      10. For all markup code and all authoring interface views generated by the tool, all emphasis can be derived programmatically
      11. For all markup code and all authoring interface views generated by the tool, all information presented through color is also available without color
      12. For all markup code and all authoring interface views generated by the tool, all text that is presented over a background is electronically available so that it could be re-presented in a form that allows the text to be distinguished from the background
      13. For all markup code and all authoring interface views generated by the tool, all of the functionality of the content, where the functionality or its outcome can be described in a sentence, is operable through a keyboard or keyboard interface.
      14. For all markup code and all authoring interface views generated by the tool, all content is designed so that time limits are not an essential part of interaction, or at least one of the following is always true for each time limit: the user is allowed to deactivate the time limit, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the length of the default setting, or the user is warned before time expires and allowed to extend the time limit with a simple action, or the time limit is an important part of a real-time event and no alternative to the time limit is possible, or the time limit is part of an activity where timing is essential, and time limits can not be extended further without invalidating the activity
      15. For all markup code and all authoring interface views generated by the tool, all content that violates General Flash Threshold or Red Flash Threshold is always marked in a way that the user can access prior to its appearance
      16. For all markup code and all authoring interface views generated by the tool, the natural language of each document as a whole can be identified by automated tools
      17. For all markup code and all authoring interface views generated by the tool, the meaning of all abbreviations and acronyms can be programmatically located in all instances
      18. For all markup code and all authoring interface views generated by the tool, all instances of extreme changes of content are always implemented in a manner that can be programmatically identified
      19. For all markup code and all authoring interface views generated by the tool, except where a site has documented that a specification was violated for backward compatibility or compatibility with assistive technology, the technology has passed validity tests for the version of the technology in use
      20. For all markup code and all authoring interface views generated by the tool, all structural elements and attributes are always used as defined in the specification
      21. For all markup code and all authoring interface views generated by the tool, at least one plug-in required to access the content conforms to at least the default set of conformance requirements of the User Agent Accessibility Guidelines (UAAG) 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If required plug-ins are not accessible, an alternative solution is always provided that conforms to WCAG 2.0. If inaccessible plug-ins are available, then a method for obtaining an accessible plug-in is always provided from the content.
      22. For all markup code and all authoring interface views generated by the tool, any programmatic user interface components of the content always conform to at least the default set of conformance requirements of the UAAG 1.0 at Level A plus the sets of requirements (a) through (j) (following) that apply. If the custom user interfaces cannot be made accessible, an alternative solution is always provided that meets WCAG 2.0 (including this provision) to the level claimed.
        Requirements (a) through (j):
        1. If the application renders visual text, it should conform to the VisualText checkpoints.
        2. If the application renders images, it should conform to the Image checkpoints.
        3. If the application renders animations, it should conform to the Animation checkpoints.
        4. If the application renders video, it should conform to the Video checkpoints.
        5. If the application renders audio, it should conform to the Audio checkpoints.
        6. If the application performs its own event handling, it should conform to the Events checkpoints.
        7. If the application implements a selection mechanism, it should conform to the Selection checkpoints.
        8. The application should support keyboard access per UAAG 1.0 checkpoints 1.1 and 6.7.
        9. If the application implements voice or pointer input, it should conform to the Input Modality checkpoints.
      23. For all markup code and all authoring interface views generated by the tool, synchronized captions are provided for all real-time broadcasts
      24. For all markup code and all authoring interface views generated by the tool, all information presented using color is also available without color and without having to interpret markup
      25. For all markup code and all authoring interface views generated by the tool, all text that is presented over a background has a contrast greater than ? between the text and the background as measured by ?, or the resource provides a mechanism to allow the text to meet this criterion
      26. For all markup code and all authoring interface views generated by the tool, wherever a choice between input device event handlers is available and supported, the more abstract event is always used
      27. For all markup code and all authoring interface views generated by the tool, the author is always allowed to turn off all content that blinks for more than 3 seconds
      28. For all markup code and all authoring interface views generated by the tool, the author is always allowed to pause and/or permanently stop all moving or time-based content
      29. For all markup code and all authoring interface views generated by the tool, no content violates the General Flash Threshold or Red Flash Threshold
      30. For all markup code and all authoring interface views generated by the tool, all different structural elements always look or sound different from each other and from body text
      31. For all markup code and all authoring interface views generated by the tool, in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is always provided: (a) hierarchical structure, (b) table of contents (for pages) or site map (for sites), (c) alternate display order (for pages) or alternate site map navigation mechanisms (for sites)
      32. For all markup code and all authoring interface views generated by the tool, all large blocks of material that are repeated on multiple pages, such as navigation menus with 8 or more links, can always be bypassed by people who use screen readers or who navigate via keyboard or keyboard interface
      33. For all markup code and all authoring interface views generated by the tool, if an author error is detected, the error is always identified and provided to the author in text
      34. For all markup code and all authoring interface views generated by the tool, if an author error is detected, and suggestions for correction are known and can be provided without jeopardizing security or purpose (for example, test validity), they are always provided (in an accessible form that meets Level 1 success criteria)
      35. For all markup code and all authoring interface views generated by the tool, where consequences are significant and time-response is not important, one of the following is always true: (a) actions are reversible, (b) where not reversible, actions are checked for errors before going on to the next step in the process, (c) where not reversible, and not checkable, the user is able to review and confirm or correct information before submitting it
      36. For all markup code and all authoring interface views generated by the tool, page titles are always informative
      37. For all markup code and all authoring interface views generated by the tool, the meanings and pronunciations of all words in the content can always be programmatically located
      38. For all markup code and all authoring interface views generated by the tool, the meaning of all idioms in the content can always be programmatically determined
      39. For all markup code and all authoring interface views generated by the tool, for each foreign language passage or phrase in the body of the content, the language is always identified through markup or other means.
      40. For all markup code and all authoring interface views generated by the tool, all components that are repeated on multiple "pages" within a resource or a section of a resource always occur in the same sequence each time they are repeated, for at least one presentation format
      41. For all markup code and all authoring interface views generated by the tool, all user interface components can always receive focus without causing activation
      42. For all markup code and all authoring interface views generated by the tool, changing the setting of any input field never automatically causes an extreme change in context, such as leaving the "page"
      43. For all markup code and all authoring interface views generated by the tool, all interactive elements that appear on multiple "pages", including graphical elements, are always associated with the same functionality wherever they appear
      44. For all markup code and all authoring interface views generated by the tool, explicit notice is always given in advance of any extreme change of context
      45. For all markup code and all authoring interface views generated by the tool, all accessibility conventions of the markup or programming language (APIs or specific markup) are always used
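Several of the visual requirements above are machine-testable in principle. As a hedged illustration of how a tester might score the contrast requirement (item 25), the sketch below computes a relative-luminance contrast ratio; the draft deliberately leaves both the threshold and the measure unspecified (the "?"), so this metric, and all function names, are assumptions for illustration only.

```python
# Illustrative sketch only: the test plan does not specify a contrast
# measure. One candidate is the relative-luminance contrast ratio.

def _channel(c):
    # sRGB channel (0-255) converted to linear light
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of lighter to darker luminance, offset to avoid division by zero
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background yields the maximum ratio, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255))))  # 21
```

A tester would compare the computed ratio against whatever threshold the working group eventually fills in for the "?" and record pass/fail on the form.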

  9. Success Criteria 4.1: Any mechanism that guides the author in sequencing authoring actions (e.g., design aids, wizards, templates) must integrate prompting, checking, and repair functions and documentation

    Test Plan:

    1. The tool designer describes all mechanisms that guide the author in sequencing authoring actions supported by the tool on a form
    2. Given this info, the author (tester) verifies that all such mechanisms successfully integrate prompting
    3. Given this info, the author (tester) verifies that all such mechanisms successfully integrate checking
    4. Given this info, the author (tester) verifies that all such mechanisms successfully integrate repair
    5. Given this info, the author (tester) verifies that all such mechanisms successfully integrate documentation

  10. Success Criteria 4.2: When an authoring action has several markup implementations (e.g., changing the color of text with presentation markup or style sheets), those markup implementations that satisfy all of the Level 2 success criteria of WCAG2.0 (06/02/04 draft) must have prominence, on the following scales, equal to or higher than those markup implementations that do not meet the above WCAG2.0 requirements

    Test Plan:

    1. The tool designer defines all ways supported by the tool of adding markup with a single mouse click or keystroke, and enters results on form
    2. Given this information (for each way and all markup examples), author (tester) successfully verifies each following item (or N/A as appropriate) and enters results on a form:

  11. Success Criteria 4.3:
    1. Continuously active processes (e.g. a checker that underlines errors as they occur, a checker that runs at each save, a checker that runs every 10 minutes, etc.) that implement functions required by checkpoints 3.1, 3.2, 3.3, and 3.7 must be enabled by default
    2. If the author chooses to disable these continuously active processes, then the tool must inform the author of the consequences of their choice
    3. The accessibility prompting, checking, repair and documentation must have at least the same prominence as prompting, checking, repair and documentation for other mandatory information in the tool (e.g., prompting for file names during saves or checking for and repairing spelling or syntax errors)

    Test Plan

    1. tool designer lists on form all continuously active processes implementing functions required by checkpoints 3.1, 3.2, 3.3, and 3.7 and supported by the tool, as well as how they operate
    2. author verifies on form that all such processes listed in fact work correctly as described
    3. tool designer lists on form how to disable any of previously-mentioned processes
    4. author verifies on form that for all such processes listed, author is given a choice by tool as to whether to disable using knowledge provided by tool designer, and that each choice gives consequences to the author
    5. author verifies on form that if choice is made to disable such a process (using information given previously), the process is successfully disabled
    6. author verifies on form that if choice is not made to disable such a process (using information given previously), the process is still enabled (works correctly)
    7. tool designer lists on form all accessibility prompting, checking, repair and documentation functions and information in the tool
    8. tool designer lists on form all prompting, checking, repair and documentation functions and information for all other mandatory information in the tool
    9. given these two lists, author verifies on form that when all accessibility functions and information from the first list are accessed and attempted, their prominence in every instance is in fact the same as that of the corresponding functions in the second list

  12. Success Criteria 4.X: The configurability of functions related to accessibility prompting, checking, repair, and documentation must be equivalent to that of comparable functions in terms of number of options under author control and the degree to which each option can be controlled.

    Test Plan

    1. Tool designer defines all mechanisms for accessibility prompting, checking, repair and documentation supported by the tool on a form
    2. Tool designer defines on a form all functions supported by the tool that are comparable to those above, and why they are comparable
    3. Given this info, the tester (author) verifies that all functions in the accessibility list have the same number of options under author control as all items in the comparable function list, and enters the results on a form
    4. Given this info, the tester (author) verifies that all functions in the accessibility list above have the same degree to which each option can be controlled as all items in the comparable function list, and enters the results on a form

    ---------------------

  13. Success Criteria 2.5: Unless the author explicitly instructs the authoring tool otherwise, all content generated by the tool must satisfy all of the WCAG2.0 06/02/04 WD Level 2 success criteria (WCAG RELATIVE PRIORITY):

    Test Plan

    1. Tool designer enters on form how content is generated by the authoring tool.
    2. Using this info, without instructing the authoring tool otherwise, author (tester) tests all of the following (or enters N/A as appropriate) and enters results for each item on form:
      1. For all content generated by the tool, synchronized captions are provided for all real-time broadcasts
      2. For all content generated by the tool, all information presented using color is also available without color and without having to interpret markup
      3. For all content generated by the tool, all text that is presented over a background has a contrast greater than ? between the text and the background as measured by ?, or the resource provides a mechanism to allow the text to meet this criterion
      4. For all content generated by the tool, wherever a choice between input device event handlers is available and supported, the more abstract event is always used
      5. For all content generated by the tool, the author is always allowed to turn off all content that blinks for more than 3 seconds
      6. For all content generated by the tool, the author is always allowed to pause and/or permanently stop all moving or time-based content
      7. For all content generated by the tool, all content does not violate the General Flash Threshold or Red Flash Threshold
      8. For all content generated by the tool, all different structural elements always look or sound different from each other and from body text
      9. For all content generated by the tool, in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is always provided: (a) hierarchical structure, (b) table of contents (for pages) or site map (for sites), (c) alternate display order (for pages) or alternate site map navigation mechanisms (for sites)
      10. For all content generated by the tool, all large blocks of material that are repeated on multiple pages, such as navigation menus with 8 or more links, can always be bypassed by people who use screen readers or who navigate via keyboard or keyboard interface
      11. For all content generated by the tool, if an author error is detected, the error is always identified and provided to the author in text
      12. For all content generated by the tool, if an author error is detected, and suggestions for correction are known and can be provided without jeopardizing security or purpose (for example, test validity), they are always provided (in an accessible form that meets Level 1 success criteria)
      13. For all content generated by the tool, where consequences are significant and time-response is not important, one of the following is always true: (a) actions are reversible, (b) where not reversible, actions are checked for errors before going on to the next step in the process, (c) where not reversible, and not checkable, the user is able to review and confirm or correct information before submitting it
      14. For all content generated by the tool, page titles are always informative
      15. For all content generated by the tool, the meanings and pronunciations of all words in the content can always be programmatically located
      16. For all content generated by the tool, the meaning of all idioms in the content can always be programmatically determined
      17. For all content generated by the tool, for each foreign language passage or phrase in the body of the content, the language is always identified through markup or other means.
      18. For all content generated by the tool, all components that are repeated on multiple "pages" within a resource or a section of a resource always occur in the same sequence each time they are repeated, for at least one presentation format
      19. For all content generated by the tool, all user interface components can always receive focus without causing activation
      20. For all content generated by the tool, changing the setting of any input field never automatically causes an extreme change in context, such as leaving the "page"
      21. For all content generated by the tool, all interactive elements that appear on multiple "pages", including graphical elements, are always associated with the same functionality wherever they appear
      22. For all content generated by the tool, explicit notice is always given in advance of any extreme change of context
      23. For all content generated by the tool, all accessibility conventions of the markup or programming language (APIs or specific markup) are always used
    3. Author explicitly instructs the authoring tool not to produce WCAG Level 2 conformant content, and determines whether the tool complies
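Parts of the checklist above lend themselves to automated testing. As an illustrative sketch (not part of the test plan itself), the following checks two machine-testable slices of items 14 and 17: that a generated page carries a non-empty title and a language declaration on its root element. All class and function names here are hypothetical.

```python
# Hedged sketch: verify that generated markup has a non-empty <title>
# (a necessary, not sufficient, condition for an informative title)
# and a lang attribute on <html>. Names are illustrative only.
from html.parser import HTMLParser

class PageCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.lang = None
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "html":
            self.lang = dict(attrs).get("lang")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def check_page(markup):
    p = PageCheck()
    p.feed(markup)
    return {"has_lang": p.lang is not None, "has_title": bool(p.title.strip())}

print(check_page('<html lang="en"><head><title>Site map</title></head></html>'))
# {'has_lang': True, 'has_title': True}
```

Whether a non-empty title is actually "informative" still requires human judgment, so a real test run would pair this with a manual review step recorded on the form.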

  14. Success Criteria 2.6: Any web content (e.g., templates, clip art, multimedia objects, scripts, applets, example pages, etc.) that is bundled or preferentially licensed (i.e., better terms for users of the authoring tool than for the general public) must satisfy the Level 2 WCAG2.0 (06/02/04 draft) success criteria (WCAG RELATIVE PRIORITY):

    Test Plan

    1. Tool designer defines all content bundled or preferentially licensed (referred to hereafter as "content") for the tool (and what preferential licensing means) on a form
    2. Using this info, author (tester) tests all of the following and enters result for each (or N/A as appropriate) on a form:
      1. For all content generated by the tool, synchronized captions are provided for all real-time broadcasts
      2. For all content generated by the tool, all information presented using color is also available without color and without having to interpret markup
      3. For all content generated by the tool, all text that is presented over a background has a contrast greater than ? between the text and the background as measured by ?, or the resource provides a mechanism to allow the text to meet this criterion
      4. For all content generated by the tool, wherever a choice between input device event handlers is available and supported, the more abstract event is always used
      5. For all content generated by the tool, the author is always allowed to turn off all content that blinks for more than 3 seconds
      6. For all content generated by the tool, the author is always allowed to pause and/or permanently stop all moving or time-based content
      7. For all content generated by the tool, all content does not violate the General Flash Threshold or Red Flash Threshold
      8. For all content generated by the tool, all different structural elements always look or sound different from each other and from body text
      9. For all content generated by the tool, in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is always provided: (a) hierarchical structure, (b) table of contents (for pages) or site map (for sites), (c) alternate display order (for pages) or alternate site map navigation mechanisms (for sites)
      10. For all content generated by the tool, all large blocks of material that are repeated on multiple pages, such as navigation menus with 8 or more links, can always be bypassed by people who use screen readers or who navigate via keyboard or keyboard interface
      11. For all content generated by the tool, if an author error is detected, the error is always identified and provided to the author in text
      12. For all content generated by the tool, if an author error is detected, and suggestions for correction are known and can be provided without jeopardizing security or purpose (for example, test validity), they are always provided (in an accessible form that meets Level 1 success criteria)
      13. For all content generated by the tool, where consequences are significant and time-response is not important, one of the following is always true: (a) actions are reversible, (b) where not reversible, actions are checked for errors before going on to the next step in the process, (c) where not reversible, and not checkable, the user is able to review and confirm or correct information before submitting it
      14. For all content generated by the tool, page titles are always informative
      15. For all content generated by the tool, the meanings and pronunciations of all words in the content can always be programmatically located
      16. For all content generated by the tool, the meaning of all idioms in the content can always be programmatically determined
      17. For all content generated by the tool, for each foreign language passage or phrase in the body of the content, the language is always identified through markup or other means.
      18. For all content generated by the tool, all components that are repeated on multiple "pages" within a resource or a section of a resource always occur in the same sequence each time they are repeated, for at least one presentation format
      19. For all content generated by the tool, all user interface components can always receive focus without causing activation
      20. For all content generated by the tool, changing the setting of any input field never automatically causes an extreme change in context, such as leaving the "page"
      21. For all content generated by the tool, all interactive elements that appear on multiple "pages", including graphical elements, are always associated with the same functionality wherever they appear
      22. For all content generated by the tool, explicit notice is always given in advance of any extreme change of context
      23. For all content generated by the tool, all accessibility conventions of the markup or programming language (APIs or specific markup) are always used

  15. Success Criteria 3.1:
    1. When the actions of the author risk creating accessibility problems (not satisfying any of the WCAG2.0 (06/02/04) Level 2 success criteria), the tool must introduce the appropriate accessible authoring practice.
    2. The intervention must occur at least once before completion of authoring (e.g., final save, publishing, etc.)
    (WCAG RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes all intervention features of tool and accessibility issues prompting such intervention on a form
    2. tool designer describes how a user could configure the schedule of intervention
    3. Using this info, author tests all of the following and enters results on a form (or enters N/A as appropriate for each item) (NOTE: x is a small number of seconds, assumed small enough that the intervention always occurs before completion of authoring):
      1. For all content generated by the tool, if synchronized captions are not always provided for all real-time broadcasts, the tool always intervenes within x seconds of the event
      2. For all content generated by the tool, if all information presented using color is not also available without color and without having to interpret markup, the tool always intervenes within x seconds of the event
      3. For all content generated by the tool, if all text that is presented over a background does not have a contrast greater than ? between the text and the background as measured by ? or the resource does not provide a mechanism to allow the text to meet this criterion, the tool always intervenes within x seconds of the event
      4. For all content generated by the tool, if wherever a choice between input device event handlers is available and supported, the more abstract event is not always used, the tool always intervenes within x seconds of the event
      5. For all content generated by the tool, if the author is not always allowed to turn off all content that blinks for more than 3 seconds, the tool always intervenes within x seconds of the event
      6. For all content generated by the tool, if the author is not always allowed to pause and/or permanently stop all moving or time-based content, the tool always intervenes within x seconds of the event
      7. For all content generated by the tool, if content sometimes violates the General Flash Threshold or Red Flash Threshold, the tool always intervenes within x seconds of the event
      8. For all content generated by the tool, if different structural elements do not always look or sound different from each other and from body text, the tool always intervenes within x seconds of the event
      9. For all content generated by the tool, if in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is not always provided: (a) hierarchical structure, (b) table of contents (for pages) or site map (for sites), (c) alternate display order (for pages) or alternate site map navigation mechanisms (for sites), the tool always intervenes within x seconds of the event
      10. For all content generated by the tool, if large blocks of material that are repeated on multiple pages, such as navigation menus with 8 or more links, cannot always be bypassed by people who use screen readers or who navigate via keyboard or keyboard interface, the tool always intervenes within x seconds of the event
      11. For all content generated by the tool, if an author error is detected, the error is not always identified and provided to the author in text, the tool always intervenes within x seconds of the event
      12. For all content generated by the tool, if an author error is detected, and suggestions for correction are known and can be provided without jeopardizing security or purpose (for example, test validity), they are not always provided (in an accessible form that meets Level 1 WCAG success criteria), the tool always intervenes within x seconds of the event
      13. For all content generated by the tool, if where consequences are significant and time-response is not important, one of the following is not always true: (a) actions are reversible, (b) where not reversible, actions are checked for errors before going on to the next step in the process, (c) where not reversible, and not checkable, the user is able to review and confirm or correct information before submitting it, the tool always intervenes within x seconds of the event
      14. For all content generated by the tool, if page titles are not always informative, the tool always intervenes within x seconds of the event
      15. For all content generated by the tool, if the meanings and pronunciations of all words in the content cannot always be programmatically located, the tool always intervenes within x seconds of the event
      16. For all content generated by the tool, if the meaning of all idioms in the content cannot always be programmatically determined, the tool always intervenes within x seconds of the event
      17. For all content generated by the tool, if for each foreign language passage or phrase in the body of the content, the language is not always identified through markup or other means, the tool always intervenes within x seconds of the event
      18. For all content generated by the tool, if all components that are repeated on multiple "pages" within a resource or a section of a resource do not always occur in the same sequence each time they are repeated, for at least one presentation format, the tool always intervenes within x seconds of the event
      19. For all content generated by the tool, if all user interface components are not always able to receive focus without causing activation, the tool always intervenes within x seconds of the event
      20. For all content generated by the tool, if changing the setting of any input field sometimes automatically causes an extreme change in context such as leaving the "page", the tool always intervenes within x seconds of the event
      21. For all content generated by the tool, if interactive elements that appear on multiple "pages", including graphical elements, are not always associated with the same functionality wherever they appear, the tool always intervenes within x seconds of the event
      22. For all content generated by the tool, if explicit notice is not always given in advance of any extreme change of context, the tool always intervenes within x seconds of the event
      23. For all content generated by the tool, if accessibility conventions of the markup or programming language (APIs or specific markup) are not always used, the tool always intervenes within x seconds of the event
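The "intervenes within x seconds" pattern repeated above can be scripted once and reused for every item. The harness below is a hedged sketch: `trigger_error` and `intervention_seen` stand in for tool-specific hooks that the tool designer would have to describe on the form; they are assumptions, not APIs of any real authoring tool.

```python
# Hedged sketch of a reusable timing harness for Success Criteria 3.1
# tests. The two callables are hypothetical tool-specific hooks.
import time

def intervenes_within(trigger_error, intervention_seen, x, poll=0.05):
    """Trigger an accessibility error, then poll until the tool's
    intervention is observed or x seconds elapse. Returns True only
    if the intervention appeared within the deadline."""
    trigger_error()
    deadline = time.monotonic() + x
    while time.monotonic() < deadline:
        if intervention_seen():
            return True
        time.sleep(poll)
    return intervention_seen()
```

In a real run, the author would bind `trigger_error` to an authoring action that violates one checklist item (e.g., inserting an unlabeled image) and `intervention_seen` to whatever observable signal the tool designer documented, then record the boolean result on the form.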

  16. Success Criteria 3.2: The tool must provide a check (automated check, semi-automated check or manual check) for detecting violations of each Level 2 success criteria of WCAG2.0 (06/02/04 draft)(WCAG RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes on form all checks supported by the tool for violation detection, what kind of check each is, and how authors would recognize or use info provided by each check
    2. given this info, author tests each following item and enters results on a form (or N/A for items as appropriate):
      1. For all content generated by the tool, if synchronized captions are not always provided for all real-time broadcasts, the tool always provides a check for detection
      2. For all content generated by the tool, if all information presented using color is not also available without color and without having to interpret markup, the tool always provides a check for detection
      3. For all content generated by the tool, if all text that is presented over a background does not have a contrast greater than ? between the text and the background as measured by ? or the resource does not provide a mechanism to allow the text to meet this criterion, the tool always provides a check for detection
      4. For all content generated by the tool, if wherever a choice between input device event handlers is available and supported, the more abstract event is not always used, the tool always provides a check for detection
      5. For all content generated by the tool, if the author is not always allowed to turn off all content that blinks for more than 3 seconds, the tool always provides a check for detection
      6. For all content generated by the tool, if the author is not always allowed to pause and/or permanently stop all moving or time-based content, the tool always provides a check for detection
      7. For all content generated by the tool, if content sometimes violates the General Flash Threshold or Red Flash Threshold, the tool always provides a check for detection
      8. For all content generated by the tool, if different structural elements do not always look or sound different from each other and from body text, the tool always provides a check for detection
      9. For all content generated by the tool, if in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is not always provided: (a) hierarchical structure, (b) table of contents (for pages) or site map (for sites), (c) alternate display order (for pages) or alternate site map navigation mechanisms (for sites), the tool always provides a check for detection
      10. For all content generated by the tool, if large blocks of material that are repeated on multiple pages, such as navigation menus with 8 or more links, cannot always be bypassed by people who use screen readers or who navigate via keyboard or keyboard interface, the tool always provides a check for detection
      11. For all content generated by the tool, if an author error is detected, the error is not always identified and provided to the author in text, the tool always provides a check for detection
      12. For all content generated by the tool, if an author error is detected, and suggestions for correction are known and can be provided without jeopardizing security or purpose (for example, test validity), they are not always provided (in an accessible form that meets Level 1 WCAG success criteria), the tool always provides a check for detection
      13. For all content generated by the tool, if where consequences are significant and time-response is not important, one of the following is not always true: (a) actions are reversible, (b) where not reversible, actions are checked for errors before going on to the next step in the process, (c) where not reversible, and not checkable, the user is able to review and confirm or correct information before submitting it, the tool always provides a check for detection
      14. For all content generated by the tool, if page titles are not always informative, the tool always provides a check for detection
      15. For all content generated by the tool, if the meanings and pronunciations of all words in the content cannot always be programmatically located, the tool always provides a check for detection
      16. For all content generated by the tool, if the meaning of all idioms in the content cannot always be programmatically determined, the tool always provides a check for detection
      17. For all content generated by the tool, if for each foreign language passage or phrase in the body of the content, the language is not always identified through markup or other means, the tool always provides a check for detection
      18. For all content generated by the tool, if all components that are repeated on multiple "pages" within a resource or a section of a resource do not always occur in the same sequence each time they are repeated, for at least one presentation format, the tool always provides a check for detection
      19. For all content generated by the tool, if all user interface components are not always able to receive focus without causing activation, the tool always provides a check for detection
      20. For all content generated by the tool, if changing the setting of any input field sometimes automatically causes an extreme change in context such as leaving the "page", the tool always provides a check for detection
      21. For all content generated by the tool, if interactive elements that appear on multiple "pages", including graphical elements, are not always associated with the same functionality wherever they appear, the tool always provides a check for detection
      22. For all content generated by the tool, if explicit notice is not always given in advance of any extreme change of context, the tool always provides a check for detection
      23. For all content generated by the tool, if accessibility conventions of the markup or programming language (APIs or specific markup) are not always used, the tool always provides a check for detection
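
    Several of the checks above lend themselves to machine testing. As one illustration, the page-title check (item 14) can be partly automated; the sketch below is a hypothetical helper, not part of any real tool, that flags missing, empty, or boilerplate titles for the author to record on the form.

```python
# Sketch of an automated check for item 14 (uninformative page titles).
# Hypothetical helper: flags missing, empty, or boilerplate titles so
# the author can record the result on the test form.
from html.parser import HTMLParser

# Illustrative list of titles treated as uninformative boilerplate.
DEFAULT_TITLES = {"untitled", "untitled document", "new page", "index"}

class TitleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def check_title(html: str) -> bool:
    """Return True if the page title passes the check."""
    parser = TitleExtractor()
    parser.feed(html)
    title = parser.title.strip()
    return bool(title) and title.lower() not in DEFAULT_TITLES
```

    A check like this is only semi-automated in spirit: it can prove a title is missing or boilerplate, but "informative" ultimately still needs the author's judgment.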

  17. Success Criteria 3.3: The tool must provide a repair (automated repair, semi-automated repair or manual repair) for correcting violations of each Level 2 success criteria of WCAG2.0 (06/02/04 draft)(WCAG RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes on form all repairs provided for violation correction by the tool, and what kind of repair it is
    2. given this info, author tests each item below to ensure repair successfully made (or enters N/A for item as appropriate), and enters results on form:
      1. For all content generated by the tool, if synchronized captions are not always provided for all real-time broadcasts, the tool always provides a repair for correction
      2. For all content generated by the tool, if all information presented using color is not also available without color and without having to interpret markup, the tool always provides a repair for correction
      3. For all content generated by the tool, if all text that is presented over a background does not have a contrast greater than ? between the text and the background as measured by ? or the resource does not provide a mechanism to allow the text to meet this criterion, the tool always provides a repair for correction
      4. For all content generated by the tool, if wherever a choice between input device event handlers is available and supported, the more abstract event is not always used, the tool always provides a repair for correction
      5. For all content generated by the tool, if the author is not always allowed to turn off all content that blinks for more than 3 seconds, the tool always provides a repair for correction
      6. For all content generated by the tool, if the author is not always allowed to pause and/or permanently stop all moving or time-based content, the tool always provides a repair for correction
      7. For all content generated by the tool, if content sometimes violates the General Flash Threshold or Red Flash Threshold, the tool always provides a repair for correction
      8. For all content generated by the tool, if different structural elements do not always look or sound different from each other and from body text, the tool always provides a repair for correction
      9. For all content generated by the tool, if in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is not always provided: (a) hierarchical structure, (b) table of contents (for pages) or site map (for sites), (c) alternate display order (for pages) or alternate site map navigation mechanisms (for sites), the tool always provides a repair for correction
      10. For all content generated by the tool, if large blocks of material that are repeated on multiple pages, such as navigation menus with 8 or more links, can not always be bypassed by people who use screen readers or who navigate via keyboard or keyboard interface, the tool always provides a repair for correction
      11. For all content generated by the tool, if, when an author error is detected, the error is not always identified and provided to the author in text, the tool always provides a repair for correction
      12. For all content generated by the tool, if, when an author error is detected and suggestions for correction are known and can be provided without jeopardizing security or purpose (for example, test validity), they are not always provided (in an accessible form that meets Level 1 WCAG success criteria), the tool always provides a repair for correction
      13. For all content generated by the tool, if, where consequences are significant and time-response is not important, one of the following is not always true: (a) actions are reversible, (b) where not reversible, actions are checked for errors before going on to the next step in the process, (c) where not reversible and not checkable, the user is able to review and confirm or correct information before submitting it, the tool always provides a repair for correction
      14. For all content generated by the tool, if page titles are not always informative, the tool always provides a repair for correction
      15. For all content generated by the tool, if the meanings and pronunciations of all words in the content can not always be programmatically located, the tool always provides a repair for correction
      16. For all content generated by the tool, if the meaning of all idioms in the content can not always be programmatically determined, the tool always provides a repair for correction
      17. For all content generated by the tool, if for each foreign language passage or phrase in the body of the content, the language is not always identified through markup or other means, the tool always provides a repair for correction
      18. For all content generated by the tool, if all components that are repeated on multiple "pages" within a resource or a section of a resource do not always occur in the same sequence each time they are repeated, for at least one presentation format, the tool always provides a repair for correction
      19. For all content generated by the tool, if all user interface components are not always able to receive focus without causing activation, the tool always provides a repair for correction
      20. For all content generated by the tool, if changing the setting of any input field sometimes automatically causes an extreme change in context such as leaving the "page", the tool always provides a repair for correction
      21. For all content generated by the tool, if interactive elements that appear on multiple "pages", including graphical elements, are not always associated with the same functionality wherever they appear, the tool always provides a repair for correction
      22. For all content generated by the tool, if explicit notice is not always given in advance of any extreme change of context, the tool always provides a repair for correction
      23. For all content generated by the tool, if accessibility conventions of the markup or programming language (APIs or specific markup) are not always used, the tool always provides a repair for correction

    ----------------------------------------------------------------------------------------------------

    Conformance Level AAA (after doing A and AA)

    1. Success Criteria 1.1: The authoring interface must conform to ISO16071 level 3 (ISO16071 RELATIVE PRIORITY)

      Test Plan

      1. tool designer lists capabilities of authoring tool interface on form (and other documentation necessary for author to use)
      2. author lists ISO16071 part 7 secondary-application capabilities for testing on form
      3. author lists secondary-application elements of ISO16071 Part 7 on form
      4. author lists which secondary-application elements of ISO16071 Part 7 are passed by the authoring interface on form

    2. Success Criteria 1.2: At least one editing method must conform to ISO16071 level 3 for each element and object property editable by the tool (ISO16071 RELATIVE PRIORITY)

      Test Plan

      1. tool designer lists on form all editing methods to be considered (available) for the tool
      2. tool designer lists on form all elements editable by authoring tool
      3. tool designer lists on form all object properties editable by authoring tool
      4. author lists on form the ISO16071 secondary-application Part 7 testing criteria
      5. author tries a previously-listed editing method involving before-referenced elements and object properties against ISO16071 secondary-application Part 7 testing criteria and lists on form which ISO16071 secondary-application Part 7 criteria are passed as well as the editing method used and the elements and object properties edited

    3. Success Criteria 3.5: When objects, for which alternative equivalents have been previously provided, are inserted, the tool must always offer those alternative equivalents for reuse or modification

      Test Plan

      1. tool designer describes how objects can be inserted using the tool on a form
      2. tool designer defines how the tool supports provision of previously authored alternative equivalents (and how the tool associates these with specific objects) on a form
      3. tool designer defines how the tool offers these alternative equivalents on a form
      4. Given the previous information, author (tester) verifies that in fact the tool always successfully offers these previously stated alternative equivalents for reuse, on a form
      5. Given the previous information, author (tester) verifies that in fact the tool always successfully offers these previously stated alternative equivalents for modification, on a form

    4. Success Criteria 3.6: The tool must provide the author with an option to view a listing of all current accessibility problems.

      Test Plan

      1. The tool designer defines how the tool detects accessibility problems, what kind of accessibility problems are detected, and how they are reported, on a form
      2. The tool designer describes how a listing of accessibility problems is presented to the author, on a form
      3. Given the previous info, the tester (author) verifies that the tool successfully informs the author of an option to list all known accessibility problems; results are entered on a form
      4. Given the previous info, if the author accepts the option, the author verifies that the entire list is made available to the author every time; results are entered on the form
      5. Given the previous info, the author verifies that every entry in the list in fact is an accessibility problem (and links back to the actual problem); results are entered on a form
      6. Given the previous info, if the author refuses the option, the author verifies that the list is not made available to the author; results are entered on a form
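
      The listing the author verifies in steps 3 through 6 might be modeled as follows; the field names are assumptions, but each entry carries the link back to the actual problem that step 5 requires.

```python
# Sketch of the "list all current accessibility problems" option of
# SC 3.6. Field names are assumptions; each entry keeps a link back
# to its location so the tester can verify step 5 of the plan.
from dataclasses import dataclass

@dataclass
class Problem:
    sc_number: str      # e.g. "WCAG 1.1"
    description: str
    location: str       # link back to the actual problem

def list_problems(problems, author_accepts: bool):
    """Return the full list if the author accepts the option,
    and nothing at all if the author refuses (step 6)."""
    return list(problems) if author_accepts else []
```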

    5. Success Criteria 4.4: The authoring interface for accessibility prompting, checking, repair and documentation must be equivalent to the authoring interface for comparable functions in terms of the following characteristics:
      1. design (measured by design metaphors, artistic sophistication, sizes, fonts, colors)
      2. operation (measured by degree of automation, number of actions for activation)
      3. comprehensiveness (measured by breadth and depth of functionality coverage)

      Test Plan

      1. Tool designer defines all mechanisms for accessibility prompting, checking, repair and documentation supported by the authoring interface on a form
      2. Tool designer defines on a form all functions supported by the authoring interface that are comparable to those previously listed, and explains why they are comparable
      3. Given this info, the tester (author) verifies that all functions in the accessibility list have the same design measurements as all items in the comparable function list, and enters the results on a form
      4. Given this info, the tester (author) verifies that all functions in the top list above have the same operation measurements as all items in the comparable function list, and enters the results on a form
      5. Given this info, the tester (author) verifies that all functions in the top list above have the same comprehensiveness measurements as all items in the comparable mechanism list, and enters the results on a form
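
      One of the operation measurements named above, number of actions for activation, can be recorded and compared mechanically. The sketch below is one plausible reading of "equivalent" (each accessibility function needs no more actions than its comparable function); the counts are illustrative test data.

```python
# Sketch of one measurable comparison from SC 4.4: "number of actions
# for activation". The tester records action counts for each
# accessibility function and its declared comparable function, then
# checks equivalence. Reading "equivalent" as "no more actions than
# the comparable function" is an assumption of this sketch.
def operations_equivalent(accessibility_fns: dict, comparable_fns: dict) -> bool:
    """True when every accessibility function needs no more actions
    for activation than its comparable function."""
    return all(
        accessibility_fns[name] <= comparable_fns[name]
        for name in accessibility_fns
    )
```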

      ------------------

    6. Success Criteria 2.5: Unless the author explicitly instructs the authoring tool otherwise, all content generated by the tool must conform to the Level 3 success criteria of WCAG2 (06/02/04 draft)(WCAG RELATIVE PRIORITY):

      Test Plan

      1. tool designer explains how content is generated by the tool (on form)
      2. Given this information, author (tester) enters on form which of following items are satisfied :
        1. Tester verifies that a text document (for example, a movie script) is always provided that includes all important visual information, dialogue, and other important sounds
        2. Tester verifies that in default presentation mode, text that is presented over a background always has a contrast greater than ? between the text and background as measured by ? or the resource always provides a mechanism to allow the text to meet this criterion
        3. Tester verifies that text is not presented over a background image or pattern, or if a background image or pattern is present, the text is easily readable when the content is viewed in grayscale to determine if the background makes it difficult to identify individual characters
        4. Tester verifies that all of the functionality of the content is operable via a keyboard or keyboard interface
        5. Tester verifies that the content has been designed in a way that any time limits in the content would pass level 1, success criterion 1 of these guidelines without exception
        6. Tester verifies that any non-emergency interruptions, such as the availability of updated content, can be postponed and/or suppressed by the user
        7. Tester verifies that content does not violate any of the Spatial Pattern Thresholds
        8. Tester verifies that information is provided that would indicate at least one logical sequence in which to read a document
        9. Tester verifies that diagrams are constructed so that they have structure that users can access
        10. Tester verifies that logical tab order has been created
        11. Tester verifies that there is a statement associated with the content asserting that items from the following list were considered:
          1. breaking up text into logical paragraphs
          2. dividing documents, especially very long ones, into hierarchical sections and subsections with clear and informative titles
          3. supplying an informative title for each page or resource that can be accessed independently
          4. supplying a unique title for each page or resource that can be accessed independently
          5. revealing important non-hierarchical relationships, such as cross-references so that the relationships are represented unambiguously in the markup or data model
        12. Tester verifies that structural emphasis is evident on at least the following displays: (a) black and white monitor, (b) low resolution screens, (c) "mono" audio playback devices
        13. Tester verifies that where the input options are known, there are fewer than 75 of them, and they can be provided without jeopardizing security, test validity, etc., then users are allowed to select from a list of options as well as to enter text directly
        14. Tester verifies that checks for misspelled words are applied and correct spellings are suggested when text entry is required
        15. Tester verifies that the meaning of contracted words can be programmatically determined
        16. Tester verifies that where a word has multiple meanings and the intended meaning is not the first in the associated dictionary(ies), then additional markup or another mechanism is provided for determining the correct meaning
        17. Tester verifies that section headings and link text are understandable when read by themselves as a group (for example, in a screen reader's list of links or a table of contents)
        18. Tester verifies that there is a statement associated with the content asserting that the Strategies for Reducing the Complexity of Content were considered
        19. Tester verifies that the target of each link is clearly identified
        20. Tester verifies that graphical components that appear on multiple pages, including graphical links, are associated with the same text equivalents wherever they appear
        21. Tester verifies that components that appear visually on multiple pages, such as navigation bars, search forms, and sections within the main content, are displayed in the same location relative to other content on every page or screen where they appear
        22. Tester verifies that when components such as navigation menus and search forms appear on multiple pages, users can choose to have those elements presented in a different visual position or reading-order
        23. Tester verifies that there are no extreme changes of context
        24. Tester verifies that technologies are used according to specification without exception
        25. Tester verifies that the Web resource includes a list of the technologies user agents must support in order for its content to work as intended. (The list is documented in metadata if such metadata is supported by the format; otherwise it is documented in a policy statement associated with the content)
        26. Tester verifies that users who do not have one or more of these technologies can still access and use the resource, though the experience may be degraded
        27. Tester verifies that technologies and features on the required list are open standards or have a public specification
      3. Author explicitly instructs the authoring tool not to produce WCAG Level 3 conformant content, and determines whether the tool complies
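
      Item 3 of the checklist (grayscale readability) can be partly mechanized by converting the text and background colors to grayscale and comparing them. The luma weights below are the ITU-R BT.601 coefficients; the minimum difference of 40 is an arbitrary illustration, since this draft still leaves the real contrast threshold as "?".

```python
# Sketch supporting the grayscale readability check: convert the text
# and background colors to grayscale luma and compare. The weights are
# the ITU-R BT.601 luma coefficients; the minimum difference (40) is
# an arbitrary illustration, as the draft leaves the threshold as "?".
def luma(rgb: tuple[int, int, int]) -> float:
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def distinguishable_in_grayscale(text_rgb, background_rgb,
                                 min_diff: float = 40.0) -> bool:
    """True if text and background still differ when viewed in grayscale."""
    return abs(luma(text_rgb) - luma(background_rgb)) >= min_diff
```

      A check like this covers only solid-color backgrounds; text over a background image or pattern still needs the human grayscale-viewing inspection the item describes.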

    7. Success Criteria 2.6: Any web content (e.g., templates, clip art, multimedia objects, scripts, applets, example pages, etc.) that is bundled or preferentially licensed (i.e., better terms for users of the authoring tool than for the general public) must satisfy the Level 3 success criteria of WCAG2.0 (06/02/04 draft) (WCAG RELATIVE PRIORITY):

      Test Plan

      1. tool designer defines all kinds of web content which are preferentially licensed associated with tool (on form)
      2. tool designer defines what preferential licensing means in context of the tool (on form)
      3. Given this info, author (tester) verifies at least one of following items, and enters results on form:
        1. Tester verifies that a text document (for example, a movie script) is always provided that includes all important visual information, dialogue, and other important sounds
        2. Tester verifies that in default presentation mode, text that is presented over a background always has a contrast greater than ? between the text and background as measured by ? or the resource always provides a mechanism to allow the text to meet this criterion
        3. Tester verifies that text is not presented over a background image or pattern, or if a background image or pattern is present, the text is easily readable when the content is viewed in grayscale to determine if the background makes it difficult to identify individual characters
        4. Tester verifies that all of the functionality of the content is operable via a keyboard or keyboard interface
        5. Tester verifies that the content has been designed in a way that any time limits in the content would pass level 1, success criterion 1 of these guidelines without exception
        6. Tester verifies that any non-emergency interruptions, such as the availability of updated content, can be postponed and/or suppressed by the user
        7. Tester verifies that content does not violate any of the Spatial Pattern Thresholds
        8. Tester verifies that information is provided that would indicate at least one logical sequence in which to read a document
        9. Tester verifies that diagrams are constructed so that they have structure that users can access
        10. Tester verifies that logical tab order has been created
        11. Tester verifies that there is a statement associated with the content asserting that items from the following list were considered:
          1. breaking up text into logical paragraphs
          2. dividing documents, especially very long ones, into hierarchical sections and subsections with clear and informative titles
          3. supplying an informative title for each page or resource that can be accessed independently
          4. supplying a unique title for each page or resource that can be accessed independently
          5. revealing important non-hierarchical relationships, such as cross-references so that the relationships are represented unambiguously in the markup or data model
        12. Tester verifies that structural emphasis is evident on at least the following displays: (a) black and white monitor, (b) low resolution screens, (c) "mono" audio playback devices
        13. Tester verifies that where the input options are known, there are fewer than 75 of them, and they can be provided without jeopardizing security, test validity, etc., then users are allowed to select from a list of options as well as to enter text directly
        14. Tester verifies that checks for misspelled words are applied and correct spellings are suggested when text entry is required
        15. Tester verifies that the meaning of contracted words can be programmatically determined
        16. Tester verifies that where a word has multiple meanings and the intended meaning is not the first in the associated dictionary(ies), then additional markup or another mechanism is provided for determining the correct meaning
        17. Tester verifies that section headings and link text are understandable when read by themselves as a group (for example, in a screen reader's list of links or a table of contents)
        18. Tester verifies that there is a statement associated with the content asserting that the Strategies for Reducing the Complexity of Content were considered
        19. Tester verifies that the target of each link is clearly identified
        20. Tester verifies that graphical components that appear on multiple pages, including graphical links, are associated with the same text equivalents wherever they appear
        21. Tester verifies that components that appear visually on multiple pages, such as navigation bars, search forms, and sections within the main content, are displayed in the same location relative to other content on every page or screen where they appear
        22. Tester verifies that when components such as navigation menus and search forms appear on multiple pages, users can choose to have those elements presented in a different visual position or reading-order
        23. Tester verifies that there are no extreme changes of context
        24. Tester verifies that technologies are used according to specification without exception
        25. Tester verifies that the Web resource includes a list of the technologies user agents must support in order for its content to work as intended. (The list is documented in metadata if such metadata is supported by the format; otherwise it is documented in a policy statement associated with the content)
        26. Tester verifies that users who do not have one or more of these technologies can still access and use the resource, though the experience may be degraded
        27. Tester verifies that technologies and features on the required list are open standards or have a public specification

    8. Success Criteria 3.1:
      1. When the actions of the author risk creating accessibility problems according to the Level 3 success criteria of WCAG2.0 (06/02/04 draft) the tool must introduce the appropriate accessible authoring practice.
      2. The intervention must occur at least once before completion of authoring (e.g., final save, publishing, etc.)
      (WCAG RELATIVE PRIORITY):

      Test Plan

      1. tool designer describes kinds of accessibility problems that can be detected using tool (on form)
      2. tool designer describes how tool intervenes and notifies author (on form)
      3. tool designer describes how tool introduces appropriate accessible authoring practice (on form)
      4. Given this info, author (tester) verifies each following item (or N/A as appropriate) and enters results on form (x is a small number, and is assumed to be before completion of authoring):
        1. Tester verifies that if a text document (for example, a movie script) is not always provided that includes all important visual information, dialogue, and other important sounds, the tool always intervenes within x seconds of the determination
        2. Tester verifies that if in default presentation mode, text that is presented over a background does not always have a contrast greater than ? between the text and background as measured by ? or the resource does not always provide a mechanism to allow the text to meet this criterion, the tool always intervenes within x seconds of the determination
        3. Tester verifies that if text is presented over a background image or pattern and the text is not easily readable when the content is viewed in grayscale (indicating that the background makes it difficult to identify individual characters), the tool always intervenes within x seconds of the determination
        4. Tester verifies that if not all of the functionality of the content is operable via a keyboard or keyboard interface, the tool always intervenes within x seconds of the determination
        5. Tester verifies that if the content has been designed in a way that any time limits in the content would not pass level 1, success criterion 1 of these guidelines without exception, the tool always intervenes within x seconds of the determination
        6. Tester verifies that if any non-emergency interruptions, such as the availability of updated content, can not be postponed and/or suppressed by the user, the tool always intervenes within x seconds of the determination
        7. Tester verifies that if content violates any of the Spatial Pattern Thresholds, the tool always intervenes within x seconds of the determination
        8. Tester verifies that if information is not provided that would indicate at least one logical sequence in which to read a document, the tool always intervenes within x seconds of the determination
        9. Tester verifies that if diagrams are not constructed so that they have structure that users can access, the tool always intervenes within x seconds of the determination
        10. Tester verifies that if logical tab order has not been created, the tool always intervenes within x seconds of the determination
        11. Tester verifies that if there is not a statement associated with the content asserting that items from the following list were considered :
          1. breaking up text into logical paragraphs
          2. dividing documents, especially very long ones, into hierarchical sections and subsections with clear and informative titles
          3. supplying an informative title for each page or resource that can be accessed independently
          4. supplying a unique title for each page or resource that can be accessed independently
          5. revealing important non-hierarchical relationships, such as cross-references so that the relationships are represented unambiguously in the markup or data model
          the tool always intervenes within x seconds of the determination
        12. Tester verifies that if structural emphasis is not evident on at least the following displays: (a) black and white monitor, (b) low resolution screens, (c) "mono" audio playback devices, the tool always intervenes within x seconds of the determination
        13. Tester verifies that if, where the input options are known, there are fewer than 75 of them, and they can be provided without jeopardizing security, test validity, etc., users are not allowed to select from a list of options as well as to enter text directly, the tool always intervenes within x seconds of the determination
        14. Tester verifies that if checks for misspelled words are not applied or correct spellings are not suggested when text entry is required, the tool always intervenes within x seconds of the determination
        15. Tester verifies that if the meaning of contracted words can not be programmatically determined, the tool always intervenes within x seconds of the determination
        16. Tester verifies that if, where a word has multiple meanings and the intended meaning is not the first in the associated dictionary(ies), additional markup or another mechanism is not provided for determining the correct meaning, the tool always intervenes within x seconds of the determination
        17. Tester verifies that if section headings and link text are not understandable when read by themselves as a group (for example, in a screen reader's list of links or a table of contents), the tool always intervenes within x seconds of the determination
        18. Tester verifies that if there is not a statement associated with the content asserting that the Strategies for Reducing the Complexity of Content were considered, the tool always intervenes within x seconds of the determination
        19. Tester verifies that if the target of each link is not clearly identified, the tool always intervenes within x seconds of the determination
        20. Tester verifies that if graphical components that appear on multiple pages, including graphical links, are not associated with the same text equivalents wherever they appear, the tool always intervenes within x seconds of the determination
        21. Tester verifies that if components that appear visually on multiple pages, such as navigation bars, search forms, and sections within the main content, are not displayed in the same location relative to other content on every page or screen where they appear, the tool always intervenes within x seconds of the determination
        22. Tester verifies that if when components such as navigation menus and search forms appear on multiple pages, users can not choose to have those elements presented in a different visual position or reading-order, the tool always intervenes within x seconds of the determination
        23. Tester verifies that if there are extreme changes of context, the tool always intervenes within x seconds of the determination
        24. Tester verifies that if technologies are not used according to specification without exception, the tool always intervenes within x seconds of the determination
        25. Tester verifies that if the Web resource does not include a list of the technologies user agents must support in order for its content to work as intended (the list is documented in metadata if such metadata is supported by the format; otherwise it is documented in a policy statement associated with the content), the tool always intervenes within x seconds of the determination
        26. Tester verifies that if users who do not have one or more of these technologies cannot access and use the resource, even in a degraded form, the tool always intervenes within x seconds of the determination
        27. Tester verifies that if technologies and features on the required list are neither open standards nor publicly specified, the tool always intervenes within x seconds of the determination
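
      The recurring clause "the tool always intervenes within x seconds of the determination" suggests a simple timing harness. The hook names below are assumptions; a real harness would trigger the authoring action in the tool under test and observe its actual intervention.

```python
# Sketch of how a tester might verify "the tool always intervenes
# within x seconds of the determination". The `trigger` and
# `detect_intervention` hooks are assumptions standing in for the
# tool under test.
import time

def intervention_within(trigger, detect_intervention, x_seconds: float) -> bool:
    """Trigger the authoring action, then poll until the tool
    intervenes; pass if that happens within x seconds."""
    trigger()
    deadline = time.monotonic() + x_seconds
    while time.monotonic() < deadline:
        if detect_intervention():
            return True
        time.sleep(0.01)
    return detect_intervention()
```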

    9. Success Criteria 3.2: The tool must provide a check (automated check, semi-automated check or manual check) for detecting violations of each Level 3 success criteria of WCAG2.0 (06/02/04 draft)(WCAG RELATIVE PRIORITY):

      Test Plan

      1. The tool designer defines how checks (for detecting accessibility violations) are provided by the tool on a form
      2. The tool designer describes each check (for detecting accessibility violations), and what kind it is, on a form
      3. Given this info, the author (tester) verifies each following item (or enters N/A as appropriate) on a form:
        1. Tester verifies that if a text document (for example, a movie script) is not always provided that includes all important visual information, dialogue, and other important sounds, the tool always provides a check for detection
        2. Tester verifies that if in default presentation mode, text that is presented over a background does not always have a contrast greater than between the text and background as measured by ? or the resource does not always provide a mechanism to allow the text to meet this criterion, the tool always provides a check for detection
        3. Tester verifies that if text is presented over a background image or pattern, or if a background image or pattern is present, and the text is not easily readable when the content is viewed in grayscale (to determine whether the background makes it difficult to identify individual characters), the tool always provides a check for detection
        4. Tester verifies that if not all of the functionality of the content is operable via a keyboard or keyboard interface, the tool always provides a check for detection
        5. Tester verifies that if the content has been designed in a way that any time limits in the content would not pass Level 1, Success Criteria 1 of these guidelines without exceptions, the tool always provides a check for detection
        6. Tester verifies that if any non-emergency interruptions, such as the availability of updated content, cannot be postponed and/or suppressed by the user, the tool always provides a check for detection
        7. Tester verifies that if content violates some of the Spatial Pattern Thresholds, the tool always provides a check for detection
        8. Tester verifies that if information is not provided that would indicate at least one logical sequence in which to read a document, the tool always provides a check for detection
        9. Tester verifies that if diagrams are not constructed so that they have structure that users can access, the tool always provides a check for detection
        10. Tester verifies that if logical tab order has not been created, the tool always provides a check for detection
        11. Tester verifies that if there is not a statement associated with the content asserting that items from the following list were considered:
          1. breaking up text into logical paragraphs
          2. dividing documents, especially very long ones, into hierarchical sections and subsections with clear and informative titles
          3. supplying an informative title for each page or resource that can be accessed independently
          4. supplying a unique title for each page or resource that can be accessed independently
          5. revealing important non-hierarchical relationships, such as cross-references so that the relationships are represented unambiguously in the markup or data model
          the tool always provides a check for detection
        12. Tester verifies that if structural emphasis is not evident on at least the following displays: (a) black-and-white monitors, (b) low-resolution screens, (c) "mono" audio playback devices, the tool always provides a check for detection
        13. Tester verifies that if, where the input options are known, there are fewer than 75 of them, and they can be provided without jeopardizing security, test validity, etc., users are not allowed to select from a list of options as well as to enter text directly, the tool always provides a check for detection
        14. Tester verifies that if checks for misspelled words are not applied or correct spellings are not suggested when text entry is required, the tool always provides a check for detection
        15. Tester verifies that if the meaning of contracted words cannot be programmatically determined, the tool always provides a check for detection
        16. Tester verifies that if, where a word has multiple meanings and the intended meaning is not the first in the associated dictionary(s), additional markup or another mechanism is not provided for determining the correct meaning, the tool always provides a check for detection
        17. Tester verifies that if section headings and link text are not understandable when read by themselves as a group (for example, in a screen reader's list of links or a table of contents), the tool always provides a check for detection
        18. Tester verifies that if there is not a statement associated with the content asserting that the Strategies for Reducing the Complexity of Content were considered, the tool always provides a check for detection
        19. Tester verifies that if the target of each link is not clearly identified, the tool always provides a check for detection
        20. Tester verifies that if graphical components that appear on multiple pages, including graphical links, are not associated with the same text equivalents wherever they appear, the tool always provides a check for detection
        21. Tester verifies that if components that appear visually on multiple pages, such as navigation bars, search forms, and sections within the main content, are not displayed in the same location relative to other content on every page or screen where they appear, the tool always provides a check for detection
        22. Tester verifies that if, when components such as navigation menus and search forms appear on multiple pages, users cannot choose to have those elements presented in a different visual position or reading order, the tool always provides a check for detection
        23. Tester verifies that if there are extreme changes of context, the tool always provides a check for detection
        24. Tester verifies that if technologies are not used according to specification without exception, the tool always provides a check for detection
        25. Tester verifies that if the Web resource does not include a list of the technologies user agents must support in order for its content to work as intended (the list is documented in metadata if such metadata is supported by the format; otherwise it is documented in a policy statement associated with the content), the tool always provides a check for detection
        26. Tester verifies that if users who do not have one or more of these technologies cannot access and use the resource, even though the experience may be degraded, the tool always provides a check for detection
        27. Tester verifies that if technologies and features on the required list are neither open standards nor have a public specification, the tool always provides a check for detection
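
      Since the test plan allows combinations of machine testing and human-controlled testing, some of the checks above can themselves be automated. As a purely illustrative sketch (the function names and data shapes are hypothetical, not part of ATAG or any particular tool's API), the following Python fragment implements one such automated check: detecting violations of the "unique title for each page or resource" item (11.4 above) across a set of pages. A non-empty result would be recorded on the test form as a detected violation.

```python
from html.parser import HTMLParser


class TitleExtractor(HTMLParser):
    """Collect the text content of the <title> element of one page."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


def check_unique_titles(pages):
    """Return titles shared by more than one page.

    `pages` maps a page identifier to its raw HTML source.
    An empty result means the check passes; a non-empty result
    maps each duplicated title to the offending page identifiers.
    """
    seen = {}
    for page_id, html in pages.items():
        parser = TitleExtractor()
        parser.feed(html)
        seen.setdefault(parser.title.strip(), []).append(page_id)
    return {t: ids for t, ids in seen.items() if len(ids) > 1}
```

      For example, feeding two pages that both carry `<title>Home</title>` returns `{"Home": [...]}` naming both pages, whereas a set of distinctly titled pages returns an empty mapping.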

    10. Success Criteria 3.3: The tool must provide a repair (automated repair, semi-automated repair or manual repair) for correcting violations of each Level 3 requirement of WCAG2.0 (06/02/04 draft)(WCAG RELATIVE PRIORITY):

      Test Plan

      1. The tool designer defines how repairs (for correcting accessibility violations) are provided by the tool on a form
      2. The tool designer describes each repair (for correcting accessibility violations), and what kind it is (automated, semi-automated or manual), on a form
      3. Given this information, the author (tester) verifies each of the following items (or enters N/A as appropriate) on a form:
        1. Tester verifies that if a text document (for example, a movie script) is not always provided that includes all important visual information, dialogue, and other important sounds, the tool always provides a repair for correction
        2. Tester verifies that if in default presentation mode, text that is presented over a background does not always have a contrast greater than between the text and background as measured by ? or the resource does not always provide a mechanism to allow the text to meet this criterion, the tool always provides a repair for correction
        3. Tester verifies that if text is presented over a background image or pattern, or if a background image or pattern is present, and the text is not easily readable when the content is viewed in grayscale (to determine whether the background makes it difficult to identify individual characters), the tool always provides a repair for correction
        4. Tester verifies that if not all of the functionality of the content is operable via a keyboard or keyboard interface, the tool always provides a repair for correction
        5. Tester verifies that if the content has been designed in a way that any time limits in the content would not pass Level 1, Success Criteria 1 of these guidelines without exceptions, the tool always provides a repair for correction
        6. Tester verifies that if any non-emergency interruptions, such as the availability of updated content, cannot be postponed and/or suppressed by the user, the tool always provides a repair for correction
        7. Tester verifies that if content violates some of the Spatial Pattern Thresholds, the tool always provides a repair for correction
        8. Tester verifies that if information is not provided that would indicate at least one logical sequence in which to read a document, the tool always provides a repair for correction
        9. Tester verifies that if diagrams are not constructed so that they have structure that users can access, the tool always provides a repair for correction
        10. Tester verifies that if logical tab order has not been created, the tool always provides a repair for correction
        11. Tester verifies that if there is not a statement associated with the content asserting that items from the following list were considered:
          1. breaking up text into logical paragraphs
          2. dividing documents, especially very long ones, into hierarchical sections and subsections with clear and informative titles
          3. supplying an informative title for each page or resource that can be accessed independently
          4. supplying a unique title for each page or resource that can be accessed independently
          5. revealing important non-hierarchical relationships, such as cross-references so that the relationships are represented unambiguously in the markup or data model
          the tool always provides a repair for correction
        12. Tester verifies that if structural emphasis is not evident on at least the following displays: (a) black-and-white monitors, (b) low-resolution screens, (c) "mono" audio playback devices, the tool always provides a repair for correction
        13. Tester verifies that if, where the input options are known, there are fewer than 75 of them, and they can be provided without jeopardizing security, test validity, etc., users are not allowed to select from a list of options as well as to enter text directly, the tool always provides a repair for correction
        14. Tester verifies that if checks for misspelled words are not applied or correct spellings are not suggested when text entry is required, the tool always provides a repair for correction
        15. Tester verifies that if the meaning of contracted words cannot be programmatically determined, the tool always provides a repair for correction
        16. Tester verifies that if, where a word has multiple meanings and the intended meaning is not the first in the associated dictionary(s), additional markup or another mechanism is not provided for determining the correct meaning, the tool always provides a repair for correction
        17. Tester verifies that if section headings and link text are not understandable when read by themselves as a group (for example, in a screen reader's list of links or a table of contents), the tool always provides a repair for correction
        18. Tester verifies that if there is not a statement associated with the content asserting that the Strategies for Reducing the Complexity of Content were considered, the tool always provides a repair for correction
        19. Tester verifies that if the target of each link is not clearly identified, the tool always provides a repair for correction
        20. Tester verifies that if graphical components that appear on multiple pages, including graphical links, are not associated with the same text equivalents wherever they appear, the tool always provides a repair for correction
        21. Tester verifies that if components that appear visually on multiple pages, such as navigation bars, search forms, and sections within the main content, are not displayed in the same location relative to other content on every page or screen where they appear, the tool always provides a repair for correction
        22. Tester verifies that if, when components such as navigation menus and search forms appear on multiple pages, users cannot choose to have those elements presented in a different visual position or reading order, the tool always provides a repair for correction
        23. Tester verifies that if there are extreme changes of context, the tool always provides a repair for correction
        24. Tester verifies that if technologies are not used according to specification without exception, the tool always provides a repair for correction
        25. Tester verifies that if the Web resource does not include a list of the technologies user agents must support in order for its content to work as intended (the list is documented in metadata if such metadata is supported by the format; otherwise it is documented in a policy statement associated with the content), the tool always provides a repair for correction
        26. Tester verifies that if users who do not have one or more of these technologies cannot access and use the resource, even though the experience may be degraded, the tool always provides a repair for correction
        27. Tester verifies that if technologies and features on the required list are neither open standards nor have a public specification, the tool always provides a repair for correction
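
      A semi-automated repair naturally pairs with an automated check: the tool proposes a correction and the author confirms or edits it before it is applied to the content. The following sketch (hypothetical names; not taken from ATAG or from any actual tool) proposes a distinct title for each page flagged by a duplicate-title check such as item 11.4 above, by suffixing the page identifier. In a real tool the author would review each proposal on the test form before it is written back.

```python
def propose_title_repairs(duplicates):
    """Propose a distinct replacement title for each affected page.

    `duplicates` is the output of a duplicate-title check:
    a mapping from a shared title to the list of page identifiers
    that use it. This models the automated half of a semi-automated
    repair; the author accepts or edits each proposal.
    """
    proposals = {}
    for title, page_ids in duplicates.items():
        for page_id in page_ids:
            # Suffix the page identifier to make the title unique.
            proposals[page_id] = f"{title} ({page_id})"
    return proposals
```

      For example, given `{"Home": ["a.html", "b.html"]}` the sketch proposes `"Home (a.html)"` and `"Home (b.html)"` for the two pages.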

      References

      1. ATAG 2.0 WD 25 Jun 2004
      2. WCAG 2.0 WD 2 Jun 2004
      3. ISO 16071: 2002(E)