Test Plan for ATAG (based on 21 Oct 03 ATAG WD) - Dec 01 2003

Conformance Level A

  1. Success Criteria 1.1: The authoring interface must pass required elements of the Software Accessibility Guidelines testing criteria. NOTE: The tool designer is assumed to be knowledgeable about the tool; the author is one who uses the tool (no prior knowledge of the tool is assumed, but the author can be the same person as the tool designer, and many authors can perform tests on a single tool). The test plan may include combinations of machine testing and human-controlled testing. "Form" below includes a public listing of pertinent test data (including test number, success criteria number, individuals involved, and results).

    Test Plan

    1. tool designer lists capabilities of authoring interface on form (and other documentation necessary for author to use)
    2. author lists whether ISO16071 or IBM SAGs are referenced for testing on form
    3. author lists required elements of ISO16071 on form
    4. author lists required elements of SAG on form
    5. author lists which required elements are passed by the authoring interface on form

  2. Success Criteria 1.2: At least one editing method must pass the Software Accessibility Guidelines testing criteria for each element and object property editable by the tool

    Test Plan

    1. tool designer lists on form all editing methods to be considered (available) for the tool
    2. tool designer lists on form all elements editable by authoring tool
    3. tool designer lists on form all object properties editable by authoring tool
    4. author lists on form which testing criteria are used: ISO16071 or IBM SAG
    5. author tries an above-listed editing method against the selected testing criteria, using the before-referenced elements and object properties, and lists on the form which criteria are passed, as well as the editing method used and the elements and object properties edited

  3. Success Criteria 1.3:
    1. All editing views must display text equivalents for any non-text content
    2. All editing views must either respect operating system display settings (for color, contrast, size, and font) or, from within the tool, provide a means of changing color, contrast, size, and font without affecting the content markup

    Test Plan

    1. Tool designer defines all editing views supported by the tool on form (as well as non-text content types supported?)
    2. Tool designer defines all possible operating system display settings for color, contrast, size and font supported by tool on form
    3. the above-mentioned editing views are each tested by author for random (all?) samples of non-text content to make sure a text equivalent is displayed for each sample, and the result is placed on the form
    4. the above-mentioned editing views are each tested by author against known settings for color, contrast, size, and font, if this choice is checked on the form
    5. the authoring tool is tested by author to see if color, contrast, size, and font can each be changed (to known, testable, verifiable values) on the above-mentioned editing views for a reference piece of content, if this choice is checked on the form
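
    Step 5 above can be partially machine-tested: serialize the content markup before and after the display-setting change and confirm the two are byte-identical. The sketch below assumes nothing about any particular authoring tool's API; the function names are illustrative only.

```python
import hashlib

# Hypothetical sketch for SC 1.3 item 2: a display-only change (color,
# contrast, size, or font) must leave the saved content markup untouched.
# The function names are assumptions for illustration, not a real tool API.

def markup_fingerprint(markup: str) -> str:
    """Return a stable digest of the content markup."""
    return hashlib.sha256(markup.encode("utf-8")).hexdigest()

def display_change_preserves_markup(markup_before: str,
                                    markup_after: str) -> bool:
    """True if the markup is identical before and after the change."""
    return markup_fingerprint(markup_before) == markup_fingerprint(markup_after)

# A reference piece of content saved before and after changing the font
# preference should produce the same fingerprint.
doc = '<p>Hello</p>'
assert display_change_preserves_markup(doc, doc)
```

    A fingerprint comparison is deliberately strict: any difference in the serialized markup, however small, counts as a failure to record on the form.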

  4. Success Criteria 2.1: All markup strings written automatically by the tool (i.e., not authored "by hand") must conform to the applicable markup language specification

    Test Plan

    1. on form, tool designer defines how markup strings conforming to markup language specifications are automatically generated by the tool
    2. on form, tool designer specifies which markup language specifications are supported by the tool
    3. under author control, authoring tool generates a series of markup strings automatically
    4. author checks that each markup string is verified against the appropriate language specification using defined mechanisms - author lists on form markup string generated and conformance verification for that string
    5. if no pre-existing mechanism defined for conformance, author will explain on form how each markup string conforms to each referenced specification mentioned before
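
    Step 4's verification can begin with an automated gate before any human review: each automatically generated markup string should at least be well-formed. The sketch below uses only the Python standard library; full conformance to a DTD or schema would require an external validator (e.g., the W3C Markup Validation Service or a schema-aware library), so this is a first filter, not the whole check.

```python
import xml.etree.ElementTree as ET

# Minimal well-formedness gate for automatically generated markup strings.
# Passing this check does NOT establish conformance to a markup language
# specification; it only rules out strings that cannot conform.

def is_well_formed(markup: str) -> bool:
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

generated = ['<p>valid paragraph</p>', '<p>unclosed paragraph']
results = {s: is_well_formed(s) for s in generated}
```

    Strings that fail this gate can be listed directly on the form as non-conforming; strings that pass still need validation against the appropriate language specification.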

  5. Success Criteria 2.2: In order to give priority to a format: that format must have a published techniques document for meeting each Level 1 WCAG (10/27/03 draft) success criterion (NOTE: Should this be a Relative Priority? Applicable to ATAG AA and AAA for WCAG levels 2 and 3?)

    Test Plan

    1. Tool designer lists on form formats supported by the tool
    2. Author verifies on form that all of the following are true for each supported format mentioned before (or N/A if not applicable?):
      1. For every referenced format, there exists a published techniques document for that format that indicates that non-text content that can be expressed in words has a text-equivalent explicitly associated with it.
      2. For every referenced format, there exists a published techniques document for that format that indicates that non-text content that can not be expressed in words has a descriptive label provided as its text-equivalent.
      3. For every referenced format, there exists a published techniques document for that format that indicates that an audio description is provided.
      4. For every referenced format, there exists a published techniques document for that format that indicates that all significant dialogue and sounds are captioned.
      5. For every referenced format, there exists a published techniques document for that format that indicates that descriptions and captions are synchronized with the events they represent.
      6. For every referenced format, there exists a published techniques document for that format that indicates that if the Web content is real-time video with audio, real-time captions are provided unless the content is a music program that is primarily non-vocal
      7. For every referenced format, there exists a published techniques document for that format that indicates that if the Web content is real-time non-interactive video (e.g., a Webcam of ambient conditions), either provide an equivalent that conforms to items 1 and 2 of this list (e.g., an ongoing update of weather conditions) or link to an equivalent that conforms to items 1 and 2 of this list (e.g., a link to a weather Web site).
      8. For every referenced format, there exists a published techniques document for that format that indicates that if a pure audio or pure video presentation requires a user to respond interactively at specific times in the presentation, then a time-synchronized equivalent (audio, visual or text) presentation is provided
      9. For every referenced format, there exists a published techniques document for that format that indicates that the following can be derived programmatically (i.e. through a markup or data model that is assistive technology compatible) from the content without requiring interpretation of presentation:
        1. any hierarchical elements and relationships, such as headings, paragraphs and lists
        2. any non-hierarchical relationships between elements such as cross-references and linkages, associations between labels and controls, associations between cells and their headers, etc.
        3. any emphasis
      10. For every referenced format, there exists a published techniques document for that format that indicates that any information presented through color is also available without color
      11. For every referenced format, there exists a published techniques document for that format that indicates that text content is not presented over a background image or pattern OR the text is easily readable when the page is viewed in black and white
      12. For every referenced format, there exists a published techniques document for that format that indicates that text in the content is provided in Unicode or sufficient information is provided so that it can be automatically mapped back to Unicode.
      13. For every referenced format, there exists a published techniques document for that format that indicates that all of the functionality of the content, where the functionality or its outcome can be expressed concisely in words, is operable at a minimum through a keyboard or keyboard interface.
      14. For every referenced format, there exists a published techniques document for that format that indicates that content is designed so that time limits are not an essential part of interaction or at least one of the following is true for each time limit: the user is allowed to deactivate the time limits, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the average user's preference, or the user is warned before time expires and given at least 10 seconds to extend the time limit, or the time limit is due to a real-time event (e.g. auction) and no alternative to the time limit is possible, or the time limit is part of a competitive activity where timing is an essential part of the activity (e.g. competitive gaming or time based testing).
      15. For every referenced format, there exists a published techniques document for that format that indicates that at least one of the following is true:
        1. content was not designed to flicker (or flash) in the range of 3 to 49 Hz
        2. if flicker is unavoidable, the user is warned of the flicker before they go to the page, and as close a version of the content as is possible without flicker is provided
      16. For every referenced format, there exists a published techniques document for that format that indicates that passages or fragments of text occurring within the content that are written in a language other than the primary natural language of the content as a whole, are identified, including specification of the language of the passage or fragment
      17. For every referenced format, there exists a published techniques document for that format that indicates that document attributes identify the natural language of the document.
      18. For every referenced format, there exists a published techniques document for that format that indicates that for markup, except where the site has documented that a specification was violated for backward compatibility, the markup has:
        1. passed validity tests of the language (whether it be conforming to a schema, Document Type Definition (DTD), or other tests described in the specification)
        2. structural elements and attributes are used as defined in the specification
        3. accessibility features are used
        4. deprecated features are avoided
      19. For every referenced format, there exists a published techniques document for that format that indicates that any custom user interface elements of the content conform to at least level A of the User Agent Accessibility Guidelines 1.0, and that if the custom user interfaces cannot be made accessible, an alternative solution is provided that meets WCAG2.0 to the level claimed.
    3. Author provides a link to each techniques document mentioned above on the form for each supported format

  6. Success Criteria 2.3: Tools must always meet at least one of the following:
    1. generate accessible content automatically
    2. provide a method for authoring "by hand"
    3. provide the author with accessible options for every authoring task

    Test Plan

    1. tool designer declares the capability of their authoring tool on a form by checking one or more of three boxes corresponding to the three items in the success criteria
    2. if appropriate item is checked, tool designer explains on form how function is accomplished
    3. author states on the form whether the tool successfully generates accessible content automatically, if that item is checked on the form, and if so, at what times it does this, OR
    4. author states on the form whether the tool successfully provides a method for authoring "by hand", if that item is checked on the form, and if so, at what times it does this, OR
    5. author states on the form whether the tool provides accessible options for every authoring task, if that item is checked on the form, and if so, at what times it does this
    6. for each of the above, author provides a description of the authoring task and the resulting content

  7. Success Criteria 2.4:
    1. During all transformations and conversions, any accessibility information must be preserved, unless prevented by limitations of the target format
    2. When accessibility information cannot be preserved during a conversion or transformation, the author must be notified beforehand.

    Test Plan

    1. tool designer defines all transformations and conversions possible using their tool on a form
    2. tool designer defines all accessibility information that can be provided by their tool on a form
    3. tool designer lists any limitations or restrictions imposed by the target format on a form
    4. author tries some sample transformations, and defines any accessibility information pertinent to that transformation on a form; if no accessibility information exists, it should be so stated
    5. author compares the defined accessibility information after the transformation to that before it, to make sure the same information exists, and presents the results on a form, for each transformation
    6. author would state on the form whether prior author notification was given if the previous tests gave negative results (prior meaning before the transformation was attempted)
    7. author verifies that if prior notification was made, author was given choice to abort transformation, and if author chose to abort, the transformation was in fact not attempted; this info is presented on a form
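
    The before/after comparison in steps 4-5 can be machine-assisted for at least some kinds of accessibility information. The sketch below treats img alt text as one concrete example, collects it before and after a transformation, and reports anything lost; real tools carry many more kinds of accessibility information (captions, labels, long descriptions), each needing its own comparison.

```python
from html.parser import HTMLParser

# Hedged sketch: compare one kind of accessibility information (img alt
# text) across a transformation or conversion. Anything present before but
# absent after is a candidate loss to report on the form.

class AltCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.alts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "alt" and value:
                    self.alts.add(value)

def alt_texts(markup: str) -> set:
    collector = AltCollector()
    collector.feed(markup)
    return collector.alts

def lost_accessibility_info(before: str, after: str) -> set:
    """Alt texts present before the transformation but missing after it."""
    return alt_texts(before) - alt_texts(after)

before = '<img src="logo.png" alt="Company logo">'
after = '<img src="logo.png">'
# Any non-empty result here should have triggered prior author notification.
losses = lost_accessibility_info(before, after)
```

    An empty result supports a "pass" entry on the form for that transformation; a non-empty result feeds step 6's check that the author was notified beforehand.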

  8. Success Criteria 3.4:
    1. When the author inserts an unrecognized non-text object, the tool must not insert an automatically generated text equivalent (e.g. label generated from the file name)
    2. When the author inserts a non-text object for which the tool has a previously authored equivalent (i.e. created by the author, tool designer, pre-authored content developer, etc.), but the function of the object is not known with certainty, the tool must prompt the author to confirm insertion of the equivalent. However, where the function of the non-text object is known with certainty (e.g. "home button" on a navigation bar, etc.), the tool may automatically insert the equivalent.

    Test Plan

    1. tool designer defines capability of tool re: handling of unrecognized non-text objects on a form
    2. Author in editing using tool inserts unrecognized non-text object
    3. Author verifies for every such insertion that tool does not insert text equivalent and presents this info on form
    4. tool designer defines all known non-text objects for which text equivalents exist, and gives on a form each object, its text equivalent (or a link to it), and the function of the object
    5. author inserts non-text object on list mentioned before, verifies that tool automatically inserts correct text equivalent, and provides this info on a form
    6. author inserts non-text object not on list mentioned before, and verifies that tool does not insert equivalent, but prompts the author before equivalent is inserted; this info is provided on a form
    7. author verifies that for such prompting, if author accepts, tool does in fact insert text equivalent, and if author declines, the tool does not insert text equivalent; this info is provided on a form
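
    The decision rule being tested in steps 2-7 can be stated compactly as a function over two conditions. The object model and return values below are assumptions for illustration, not any real tool's API; the three outcomes correspond to the three behaviors the author must verify.

```python
# Sketch of the SC 3.4 decision rule the test plan exercises.
# Inputs and return values are illustrative assumptions.

def equivalent_action(has_stored_equivalent: bool,
                      function_known_with_certainty: bool) -> str:
    """What the tool may do when the author inserts a non-text object.

    Returns one of:
      'no-auto-insert' - unrecognized object: the tool must NOT generate a
                         text equivalent (e.g., from the file name)
      'prompt-author'  - a previously authored equivalent exists but the
                         object's function is uncertain: the tool must
                         confirm insertion with the author
      'auto-insert'    - function known with certainty (e.g., the "home
                         button" on a navigation bar): the tool may insert
                         the equivalent automatically
    """
    if not has_stored_equivalent:
        return "no-auto-insert"
    if not function_known_with_certainty:
        return "prompt-author"
    return "auto-insert"
```

    Each branch maps to a row on the form: steps 2-3 test the first branch, step 6-7 the second, and step 5 the third.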

  9. Success Criteria 3.7: All features that play a role in creating accessible content must be documented in the help system.

    Test Plan

    1. tool designer lists on form all features assisting accessibility of content generated by the tool, and where those features are in the help system
    2. tool designer lists on form nature of help system (documentation) for the tool, and how to use help system.
    3. author verifies on form that for every feature listed, the feature is found in the help system at the stated location and is included in the documentation.
    4. Author verifies on form for each feature a level of understanding of the feature's capabilities

  10. Success Criteria 4.2:
    1. Continuously active processes (e.g. a checker that underlines errors as they occur, a checker that activates at a save, a checker that runs every 10 minutes, etc.) that implement functions required by checkpoints 3.1, 3.2, 3.3, and 3.7 must be enabled by default
    2. If the author chooses to disable these continuously active processes, then the tool must inform the author of the consequences of their choice
    3. User-initiated processes (e.g. a checker that the user requests each time) that implement functions required by checkpoints 3.1, 3.2, 3.3 and 3.7 must be available to the author at "all times during authoring" with no more steps than other "high-priority functions"
    4. When the functions required by checkpoints 3.1, 3.2, 3.3 and 3.7 are combined with other authoring functions (e.g., an accessibility-related field in a general purpose dialog box), the resulting design must include all accessibility-related functions in the top level of the combined user interface

    Test Plan

    1. tool designer lists on form all continuously active processes implementing functions required by checkpoints 3.1, 3.2, 3.3, and 3.7 and supported by the tool, as well as how they operate
    2. author verifies on form that all such processes listed in fact work correctly as described
    3. tool designer lists on form how to disable any of previously-mentioned processes
    4. author verifies on form that for all such processes listed, the author is given a choice by the tool as to whether to disable it (using knowledge provided by the tool designer), and that the tool informs the author of the consequences of each choice
    5. author verifies on form that if choice is made to disable such a process (using information given previously), the process is successfully disabled
    6. author verifies on form that if choice is not made to disable such a process (using information given previously), the process is still enabled (works correctly)
    7. tool designer lists on form all user-initiated processes available in the tool supporting checkpoints 3.1, 3.2, 3.3, 3.7, the number of steps required to access each process, and the certification that the number of steps is less than or equal to other listed high-priority functions
    8. author verifies on form that when each of these processes is tried, they work correctly every time using the number of steps indicated by the tool designer
    9. tool designer lists on form all 3.1, 3.2, 3.3, 3.7 functions that may be combined with other authoring functions (list) supported by the tool
    10. tool designer lists on form characteristics of design of top level of user interface of the tool
    11. author verifies that all of 3.1, 3.2, 3.3, 3.7 functions are immediately available, locatable at the top level of user interface of the tool using previous information, and (work correctly?).

    ------------------------

  11. Success Criteria 2.5: All markup strings written automatically by the tool (i.e., not authored "by hand") must satisfy all of the WCAG2.0 (10/27/03 draft) Level 1 success criteria (RELATIVE PRIORITY):

    Test Plan

    1. Tool designer enters on form how markup strings are written by the authoring tool.
    2. Using this info, author (tester) tests all of the following (or enters N/A as appropriate) and enters results for each item on form:
      1. Tester verifies that for every markup string written automatically by the tool, non-text content that can be expressed in words has a text-equivalent explicitly associated with it.
      2. Tester verifies that for every markup string written automatically by the tool, non-text content that can not be expressed in words has a descriptive label provided as its text-equivalent.
      3. Tester verifies that for every markup string written automatically by the tool, an audio description is provided.
      4. Tester verifies that for every markup string written automatically by the tool, all significant dialogue and sounds are captioned.
      5. Tester verifies that for every markup string written automatically by the tool, descriptions and captions are synchronized with the events they represent.
      6. Tester verifies that for every markup string written automatically by the tool, if the Web content is real-time video with audio, real-time captions are provided unless the content is a music program that is primarily non-vocal
      7. Tester verifies that for every markup string written automatically by the tool, if the Web content is real-time non-interactive video (e.g., a Webcam of ambient conditions), either provide an equivalent that conforms to items 1 and 2 of this list (e.g., an ongoing update of weather conditions) or link to an equivalent that conforms to items 1 and 2 of this list (e.g., a link to a weather Web site).
      8. Tester verifies that for every markup string written automatically by the tool, if a pure audio or pure video presentation requires a user to respond interactively at specific times in the presentation, then a time-synchronized equivalent (audio, visual or text) presentation is provided
      9. Tester verifies that for every markup string written automatically by the tool, the following can be derived programmatically (i.e. through a markup or data model that is assistive technology compatible) from the content without requiring interpretation of presentation:
        1. any hierarchical elements and relationships, such as headings, paragraphs and lists
        2. any non-hierarchical relationships between elements such as cross-references and linkages, associations between labels and controls, associations between cells and their headers, etc.
        3. any emphasis
      10. Tester verifies that for every markup string written automatically by the tool, any information presented through color is also available without color
      11. Tester verifies that for every markup string written automatically by the tool, text content is not presented over a background image or pattern OR the text is easily readable when the page is viewed in black and white
      12. Tester verifies that for every markup string written automatically by the tool, text in the content is provided in Unicode or sufficient information is provided so that it can be automatically mapped back to Unicode.
      13. Tester verifies that for every markup string written automatically by the tool, all of the functionality of the content, where the functionality or its outcome can be expressed concisely in words, is operable at a minimum through a keyboard or keyboard interface.
      14. Tester verifies that for every markup string written automatically by the tool, content is designed so that time limits are not an essential part of interaction or at least one of the following is true for each time limit: the user is allowed to deactivate the time limits, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the average user's preference, or the user is warned before time expires and given at least 10 seconds to extend the time limit, or the time limit is due to a real-time event (e.g. auction) and no alternative to the time limit is possible, or the time limit is part of a competitive activity where timing is an essential part of the activity (e.g. competitive gaming or time based testing).
      15. Tester verifies that for every markup string written automatically by the tool, at least one of the following is true:
        1. content was not designed to flicker (or flash) in the range of 3 to 49 Hz
        2. if flicker is unavoidable, the user is warned of the flicker before they go to the page, and as close a version of the content as is possible without flicker is provided
      16. Tester verifies that for every markup string written automatically by the tool, passages or fragments of text occurring within the content that are written in a language other than the primary natural language of the content as a whole, are identified, including specification of the language of the passage or fragment
      17. Tester verifies that for every markup string written automatically by the tool, document attributes identify the natural language of the document.
      18. Tester verifies that for every markup string written automatically by the tool, for markup, except where the site has documented that a specification was violated for backward compatibility, the markup has:
        1. passed validity tests of the language (whether it be conforming to a schema, Document Type Definition (DTD), or other tests described in the specification)
        2. structural elements and attributes are used as defined in the specification
        3. accessibility features are used
        4. deprecated features are avoided
      19. Tester verifies that for every markup string written automatically by the tool, any custom user interface elements of the content conform to at least level A of the User Agent Accessibility Guidelines 1.0, and that if the custom user interfaces cannot be made accessible, an alternative solution is provided that meets WCAG2.0 to the level claimed.
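
    Item 1 of the list above is one of the few that is directly machine-checkable on the generated markup strings. The sketch below flags any img element lacking an explicit alt attribute; it covers only one success criterion for one element type, and most of the other items in the list require human judgment.

```python
from html.parser import HTMLParser

# Sketch of a machine check for item 1: in a markup string written
# automatically by the tool, flag every img element with no alt attribute
# (i.e., non-text content with no explicitly associated text equivalent).

class MissingAltFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing.append(attr_map.get("src", "<no src>"))

def images_missing_alt(markup: str) -> list:
    finder = MissingAltFinder()
    finder.feed(markup)
    return finder.missing

# images_missing_alt('<img src="a.png"><img src="b.png" alt="B">')
# flags only a.png.
```

    A non-empty result gives the tester a concrete failure to enter on the form for that markup string; an empty result still leaves the remaining items to verify by hand.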

  12. Success Criteria 2.6: Any web content (e.g., templates, clip art, multimedia objects, scripts, applets, example pages, etc.) preferentially licensed (i.e., better terms for users of the tool than for others) for users of the tool, must satisfy all of the Level 1 WCAG2.0 (10/27/03 draft) success criteria (RELATIVE PRIORITY):

    Test Plan

    1. Tool designer defines all content preferentially licensed for the tool (and what preferential licensing means) on a form
    2. Using this info, author (tester) tests all of the following and enters result for each (or N/A as appropriate) on a form:
      1. Tester verifies that for every instance of such content, non-text content that can be expressed in words has a text-equivalent explicitly associated with it.
      2. Tester verifies that for every instance of such content, non-text content that can not be expressed in words has a descriptive label provided as its text-equivalent.
      3. Tester verifies that for every instance of such content, an audio description is provided.
      4. Tester verifies that for every instance of such content, all significant dialogue and sounds are captioned.
      5. Tester verifies that for every instance of such content, descriptions and captions are synchronized with the events they represent.
      6. Tester verifies that for every instance of such content, if the Web content is real-time video with audio, real-time captions are provided unless the content is a music program that is primarily non-vocal
      7. Tester verifies that for every instance of such content, if the Web content is real-time non-interactive video (e.g., a Webcam of ambient conditions), either provide an equivalent that conforms to items 1 and 2 of this list (e.g., an ongoing update of weather conditions) or link to an equivalent that conforms to items 1 and 2 of this list (e.g., a link to a weather Web site).
      8. Tester verifies that for every instance of such content, if a pure audio or pure video presentation requires a user to respond interactively at specific times in the presentation, then a time-synchronized equivalent (audio, visual or text) presentation is provided
      9. Tester verifies that for every instance of such content, the following can be derived programmatically (i.e. through a markup or data model that is assistive technology compatible) from the content without requiring interpretation of presentation:
        1. any hierarchical elements and relationships, such as headings, paragraphs and lists
        2. any non-hierarchical relationships between elements such as cross-references and linkages, associations between labels and controls, associations between cells and their headers, etc.
        3. any emphasis
      10. Tester verifies that for every instance of such content, any information presented through color is also available without color
      11. Tester verifies that for every instance of such content, text content is not presented over a background image or pattern OR the text is easily readable when the page is viewed in black and white
      12. Tester verifies that for every instance of such content, text in the content is provided in Unicode or sufficient information is provided so that it can be automatically mapped back to Unicode.
      13. Tester verifies that for every instance of such content, all of the functionality of the content, where the functionality or its outcome can be expressed concisely in words, is operable at a minimum through a keyboard or keyboard interface.
      14. Tester verifies that for every instance of such content, content is designed so that time limits are not an essential part of interaction or at least one of the following is true for each time limit: the user is allowed to deactivate the time limits, or the user is allowed to adjust the time limit over a wide range which is at least 10 times the average user's preference, or the user is warned before time expires and given at least 10 seconds to extend the time limit, or the time limit is due to a real-time event (e.g. auction) and no alternative to the time limit is possible, or the time limit is part of a competitive activity where timing is an essential part of the activity (e.g. competitive gaming or time based testing).
      15. Tester verifies that for every instance of such content, at least one of the following is true:
        1. content was not designed to flicker (or flash) in the range of 3 to 49 Hz
        2. if flicker is unavoidable, the user is warned of the flicker before they go to the page, and as close a version of the content as is possible without flicker is provided
      16. Tester verifies that for every instance of such content, passages or fragments of text occurring within the content that are written in a language other than the primary natural language of the content as a whole, are identified, including specification of the language of the passage or fragment
      17. Tester verifies that for every instance of such content, document attributes identify the natural language of the document.
      18. Tester verifies that for every instance of such content, for markup, except where the site has documented that a specification was violated for backward compatibility, the markup has:
        1. passed validity tests of the language (whether it be conforming to a schema, Document Type Definition (DTD), or other tests described in the specification)
        2. structural elements and attributes are used as defined in the specification
        3. accessibility features are used
        4. deprecated features are avoided
      19. Tester verifies that for every instance of such content, any custom user interface elements of the content conform to at least level A of the User Agent Accessibility Guidelines 1.0, and that if the custom user interfaces cannot be made accessible, an alternative solution is provided that meets WCAG2.0 to the level claimed.

  13. Success Criteria 3.1:
    1. When the actions of the author risk creating accessibility problems (not satisfying any of the WCAG2.0 (10/27/03) Level 1 success criteria), the tool must intervene to introduce the appropriate accessible authoring practice. This intervention may proceed according to a user-configurable schedule.
    2. The intervention must occur at least once before completion of authoring (e.g., final save, publishing, etc.)
    (RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes all intervention features of tool and accessibility issues prompting such intervention on a form
    2. tool designer describes how a user could configure the schedule of intervention
    3. Using this info, author tests all of the following and enters results on a form (or enters N/A as appropriate for each item) (NOTE: x is a small number that is assumed always to occur before completion of authoring):
      1. If non-text content that can be expressed in words does not have a text-equivalent explicitly associated with it, the author is notified within x seconds of the event.
      2. If non-text content that can not be expressed in words does not have a descriptive label provided as its text-equivalent, the author is notified within x seconds of the event.
      3. If it is determined that an audio description is not provided of all significant visual information in scenes, actions, and events that cannot be perceived from the sound track alone to the extent possible given the constraints posed by the existing audio track and limitations on freezing the audio visual program to insert additional auditory description, the author is notified within x seconds of the determination.
      4. If it is determined that all significant dialogue and sounds are not captioned (EXCEPTION: If the Web content is real-time and audio-only and not time-sensitive and not interactive, a transcript or other non-audio equivalent is sufficient), then the author is notified within x seconds of the event.
      5. If it is determined that descriptions and captions are not synchronized with the events they represent, then the author is notified within x seconds of the event.
      6. If it is determined that if the Web content is real-time video with audio, real-time captions are not provided (unless the content is a music program that is primarily non-vocal), then the author is notified within x seconds of the event.
      7. If it is determined that if the Web content is real-time non-interactive video (e.g., a Webcam of ambient conditions), neither is provided: an equivalent that conforms to items 1 and 2 of this list (e.g., an ongoing update of weather conditions) or link to an equivalent that conforms to items 1 and 2 of this list (e.g., a link to a weather Web site), the author is notified x seconds after the event.
      8. If it is determined that if a pure audio or pure video presentation requires a user to respond interactively at specific times in the presentation, then a time-synchronized equivalent (audio, visual or text) presentation is not provided, then the author is notified within x seconds of the event.
      9. If it is determined that the following can not be derived programmatically (i.e. through a markup or data model that is assistive technology compatible) from the content without requiring interpretation of presentation, then the author is notified within x seconds after the event.
        1. any hierarchical elements and relationships, such as headings, paragraphs and lists
        2. any non-hierarchical relationships between elements such as cross-references and linkages, associations between labels and controls, associations between cells and their headers, etc.
        3. any emphasis
      10. If it is determined that any information presented through color is not available without color, the author is notified x seconds after the determination.
      11. If it is determined that text content is presented over a background image or pattern and the text is not easily readable when the page is viewed in black and white, the author is notified within x seconds of the determination.
      12. If it is determined that text in the content is not provided in Unicode or sufficient information is not provided so that it can be automatically mapped back to Unicode, then the author is notified x seconds after the event.
      13. If it is determined that all of the functionality of the content, where the functionality or its outcome can be expressed concisely in words, is not operable at a minimum through a keyboard or keyboard interface, the author is notified x seconds after the event.
      14. If it is determined that time limits are an essential part of interaction and none of the following is true for each time limit, the author is notified x seconds after the event: the user is allowed to deactivate the time limits; or the user is allowed to adjust the time limit over a wide range which is at least 10 times the average user's preference; or the user is warned before time expires and given at least 10 seconds to extend the time limit; or the time limit is due to a real-time event (e.g. an auction) and no alternative to the time limit is possible; or the time limit is part of a competitive activity where timing is an essential part of the activity (e.g. competitive gaming or time-based testing).
      15. If it is determined that none of the following is true, then the author is notified x seconds after the event:
        1. content was not designed to flicker (or flash) in the range of 3 to 49 Hz
        2. if flicker is unavoidable, the user is warned of the flicker before they go to the page, and as close a version of the content as is possible without flicker is provided
      16. If it is determined that passages or fragments of text occurring within the content that are written in a language other than the primary natural language of the content as a whole, are not identified, including specification of the language of the passage or fragment, then the author is notified x seconds after the event.
      17. If it is determined that document attributes do not identify the natural language of the document, the author is notified x seconds after the determination.
      18. If it is determined that for markup, except where the site has documented that a specification was violated for backward compatibility, the markup has not satisfied items below, the author is notified x seconds after the event:
        1. passed validity tests of the language (whether it be conforming to a schema, Document Type Definition (DTD), or other tests described in the specification)
        2. structural elements and attributes are used as defined in the specification
        3. accessibility features are used
        4. deprecated features are avoided
      19. If it is determined that any custom user interface elements of the content do not conform to at least Level A of the User Agent Accessibility Guidelines 1.0, and that where the custom user interfaces cannot be made accessible, an alternative solution is not provided that meets WCAG2.0 (including this provision) to the level claimed, the author is notified within x seconds of the determination.
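The "notified within x seconds" pattern that runs through the list above can be exercised with a small harness. The sketch below is a minimal, assumed setup for item 1 only (missing text equivalents on images); the x value and event model are illustrative, not part of any tool under test.

```python
# Illustrative test harness for the "intervene within x seconds" pattern,
# applied to item 1 (non-text content lacking a text equivalent).
# X_SECONDS and the notification model are assumptions.
import time
from html.parser import HTMLParser

X_SECONDS = 2  # assumed user-configurable intervention deadline

class AltChecker(HTMLParser):
    """Records a time-stamped notification for every img without alt."""
    def __init__(self):
        super().__init__()
        self.notifications = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.notifications.append((time.monotonic(), tag))

def authoring_event(checker, fragment):
    """Feed one authoring event; return True if every notification
    arrived within X_SECONDS of the event."""
    start = time.monotonic()
    checker.feed(fragment)
    return all(t - start <= X_SECONDS for t, _ in checker.notifications)
```

The tester would repeat this for each item in the list, substituting the relevant detection logic.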

  14. Success Criteria 3.2: The tool must provide a check (automated check, semi-automated check or manual check) for detecting violations of each Level 1 success criteria of WCAG2.0 (10/27/03 draft)(RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes on form all checks supported by the tool for violation detection, what kind of check each is, and how authors would recognize or use info provided by each check
    2. given this info, author tests each following item and enters results on a form (or N/A for items as appropriate):
      1. If non-text content that can be expressed in words does not have a text-equivalent explicitly associated with it, tester verifies that the tool provides a check for this.
      2. If non-text content that can not be expressed in words does not have a descriptive label provided as its text-equivalent, tester verifies that the tool provides a check for this.
      3. If an audio description is not provided, the tester verifies that a check is provided by the tool for this.
      4. If all significant dialogue and sounds are not captioned, the tester verifies that a check is provided by the tool for this.
      5. If descriptions and captions are not synchronized with the events they represent, the tester verifies that a check is provided by the tool for this.
      6. If the Web content is real-time video with audio, real-time captions are not provided (unless the content is a music program that is primarily non-vocal), the tester verifies that a check is provided by the tool for this.
      7. If the Web content is real-time non-interactive video (e.g., a Webcam of ambient conditions), neither is provided: an equivalent that conforms to items 1 and 2 of this list (e.g., an ongoing update of weather conditions) or link to an equivalent that conforms to items 1 and 2 of this list (e.g., a link to a weather Web site), the tester verifies that a check is provided by the tool for this.
      8. If a pure audio or pure video presentation requires a user to respond interactively at specific times in the presentation, then a time-synchronized equivalent (audio, visual or text) presentation is not provided, the tester verifies that a check is provided by the tool for this.
      9. If the following can not be derived programmatically (i.e. through a markup or data model that is assistive technology compatible) from the content without requiring interpretation of presentation, the tester verifies that a check is provided by the tool:
        1. any hierarchical elements and relationships, such as headings, paragraphs and lists
        2. any non-hierarchical relationships between elements such as cross-references and linkages, associations between labels and controls, associations between cells and their headers, etc.
        3. any emphasis
      10. If any information presented through color is not available without color, the tester verifies that a check is provided by the tool for this.
      11. If text content is presented over a background image or pattern and the text is not easily readable when the page is viewed in black and white, the tester verifies that a check is provided by the tool.
      12. If text in the content is not provided in Unicode or sufficient information is not provided so that it can be automatically mapped back to Unicode, the tester verifies that a check is provided by the tool for this.
      13. If all of the functionality of the content, where the functionality or its outcome can be expressed concisely in words, is not operable at a minimum through a keyboard or keyboard interface, the tester verifies that a check is provided by the tool for this.
      14. If time limits are an essential part of interaction and none of the following is true for each time limit: the user is allowed to deactivate the time limits; or the user is allowed to adjust the time limit over a wide range which is at least 10 times the average user's preference; or the user is warned before time expires and given at least 10 seconds to extend the time limit; or the time limit is due to a real-time event (e.g. an auction) and no alternative to the time limit is possible; or the time limit is part of a competitive activity where timing is an essential part of the activity (e.g. competitive gaming or time-based testing), the tester verifies that a check is provided for this.
      15. If none of the following is true, the tester verifies that a check is provided for this:
        1. content was not designed to flicker (or flash) in the range of 3 to 49 Hz
        2. if flicker is unavoidable, the user is warned of the flicker before they go to the page, and as close a version of the content as is possible without flicker is provided
      16. If passages or fragments of text occurring within the content that are written in a language other than the primary natural language of the content as a whole, are not identified, including specification of the language of the passage or fragment, the tester verifies that a check is provided for this.
      17. If document attributes do not identify the natural language of the document, the tester verifies that a check is provided for this.
      18. If for markup, except where the site has documented that a specification was violated for backward compatibility, the markup has not done any below, the tester verifies that a check is made for this:
        1. passed validity tests of the language (whether it be conforming to a schema, Document Type Definition (DTD), or other tests described in the specification)
        2. structural elements and attributes are used as defined in the specification
        3. accessibility features are used
        4. deprecated features are avoided
      19. If any custom user interface elements of the content do not conform to at least level A of the User Agent Accessibility Guidelines 1.0, and if when the custom user interfaces cannot be made accessible, an alternative solution is not provided that meets WCAG2.0 to the level claimed, the tester verifies that a check is made by the tool for this.
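An automated check of the kind this success criteria calls for can be quite small. As a hedged example, the sketch below implements item 17 (document attributes identify the natural language) using only the standard library; the function name and pass/fail convention are assumptions, not a prescribed interface.

```python
# Minimal sketch of an automated check for item 17: flag a violation
# when the root html element lacks a non-empty lang attribute.
from html.parser import HTMLParser

class LangCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violation = None  # None until an html start tag is seen

    def handle_starttag(self, tag, attrs):
        if tag == "html" and self.violation is None:
            has_lang = any(name == "lang" and value for name, value in attrs)
            self.violation = not has_lang

def check_document_language(document: str) -> bool:
    """Return True when the check fires (natural language not identified)."""
    checker = LangCheck()
    checker.feed(document)
    return bool(checker.violation)
```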

  15. Success Criteria 3.3: The tool must provide a repair (automated repair, semi-automated repair or manual repair) for correcting violations of each Level 1 success criteria of WCAG2.0 (10/27/03 draft)(RELATIVE PRIORITY):

    Test Plan

    1. tool designer describes on form all repairs provided for violation correction by the tool, and what kind of repair it is
    2. given this info, author tests each item below to ensure repair successfully made (or enters N/A for item as appropriate), and enters results on form:
      1. If non-text content that can be expressed in words does not have a text-equivalent explicitly associated with it, tester verifies that the tool successfully repairs this problem.
      2. If non-text content that can not be expressed in words does not have a descriptive label provided as its text-equivalent, tester verifies that the tool successfully repairs this problem.
      3. If an audio description is not provided, the tester verifies that the tool successfully repairs this problem.
      4. If all significant dialogue and sounds are not captioned, the tester verifies that the tool successfully repairs this problem.
      5. If descriptions and captions are not synchronized with the events they represent, the tester verifies that the tool successfully repairs this problem.
      6. If the Web content is real-time video with audio, real-time captions are not provided (unless the content is a music program that is primarily non-vocal), the tester verifies that the tool successfully repairs this problem.
      7. If the Web content is real-time non-interactive video (e.g., a Webcam of ambient conditions), neither is provided: an equivalent that conforms to items 1 and 2 of this list (e.g., an ongoing update of weather conditions) or link to an equivalent that conforms to items 1 and 2 of this list (e.g., a link to a weather Web site), the tester verifies that the tool successfully repairs this problem.
      8. If a pure audio or pure video presentation requires a user to respond interactively at specific times in the presentation, then a time-synchronized equivalent (audio, visual or text) presentation is not provided, the tester verifies that the tool successfully repairs this problem.
      9. If the following can not be derived programmatically (i.e. through a markup or data model that is assistive technology compatible) from the content without requiring interpretation of presentation, the tester verifies that a successful repair is provided by the tool:
        1. any hierarchical elements and relationships, such as headings, paragraphs and lists
        2. any non-hierarchical relationships between elements such as cross-references and linkages, associations between labels and controls, associations between cells and their headers, etc.
        3. any emphasis
      10. If any information presented through color is not available without color, the tester verifies that the tool successfully repairs this problem.
      11. If text content is presented over a background image or pattern and the text is not easily readable when the page is viewed in black and white, the tester verifies that the tool successfully repairs this problem.
      12. If text in the content is not provided in Unicode or sufficient information is not provided so that it can be automatically mapped back to Unicode, the tester verifies that the tool successfully repairs this problem.
      13. If all of the functionality of the content, where the functionality or its outcome can be expressed concisely in words, is not operable at a minimum through a keyboard or keyboard interface, the tester verifies that the tool successfully repairs this problem.
      14. If time limits are an essential part of interaction and none of the following is true for each time limit: the user is allowed to deactivate the time limits; or the user is allowed to adjust the time limit over a wide range which is at least 10 times the average user's preference; or the user is warned before time expires and given at least 10 seconds to extend the time limit; or the time limit is due to a real-time event (e.g. an auction) and no alternative to the time limit is possible; or the time limit is part of a competitive activity where timing is an essential part of the activity (e.g. competitive gaming or time-based testing), then the tester verifies that the tool successfully repairs this problem.
      15. If none of the following is true, the tester verifies that the tool successfully repairs this:
        1. content was not designed to flicker (or flash) in the range of 3 to 49 Hz
        2. if flicker is unavoidable, the user is warned of the flicker before they go to the page, and as close a version of the content as is possible without flicker is provided
      16. If passages or fragments of text occurring within the content that are written in a language other than the primary natural language of the content as a whole, are not identified, including specification of the language of the passage or fragment, the tester verifies that the tool successfully repairs this.
      17. If document attributes do not identify the natural language of the document, the tester verifies that the tool successfully repairs this.
      18. If for markup, except where the site has documented that a specification was violated for backward compatibility, the markup has not done any below, the tester verifies that a successful repair is made by the tool for this:
        1. passed validity tests of the language (whether it be conforming to a schema, Document Type Definition (DTD), or other tests described in the specification)
        2. structural elements and attributes are used as defined in the specification
        3. accessibility features are used
        4. deprecated features are avoided
      19. If any custom user interface elements of the content do not conform to at least level A of the User Agent Accessibility Guidelines 1.0, and if when the custom user interfaces cannot be made accessible, an alternative solution is not provided that meets WCAG2.0 to the level claimed, the tester verifies that a successful repair is made by the tool for this problem.
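A semi-automated repair pairs detection with a query to the author for the missing information. The sketch below is an assumed illustration of item 1 (missing text equivalents): the prompt callback stands in for the tool's real dialog, and the regex-based rewriting is a simplification of a real markup editor.

```python
# Sketch of a semi-automated repair for item 1: the tool asks the author
# for a text equivalent and rewrites img tags that lack one. The prompt
# callback is an assumption standing in for the tool's dialog.
import re

def repair_missing_alt(fragment: str, prompt) -> str:
    """Insert an author-supplied alt attribute into img tags lacking one."""
    def fix(match):
        tag = match.group(0)
        if re.search(r'\balt\s*=', tag):
            return tag  # already has a text equivalent; no repair needed
        text = prompt(tag)  # semi-automated: the author supplies the words
        return tag[:-1] + f' alt="{text}">'
    return re.sub(r'<img\b[^>]*>', fix, fragment)
```

The tester would confirm both halves: the repair fires for every violating instance, and repaired markup passes the corresponding check.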

--------------------------------------------------------------------------------------------------

Conformance Level AA (after doing Conformance Level A)

  1. Success Criteria 1.4:
    1. In any element hierarchy, the author must be able to move editing focus from any structural element to any element immediately above, immediately below, or in the same level in the hierarchy
    2. In any element hierarchy, the author must be able to select, copy, cut and paste any element with its content

    Test Plan

    1. Tool designer defines on form how editing focus is moved for the tool (and document structures supported?)
    2. For every such hierarchy and editing focus, the tester (author) verifies that focus is correctly moved to element immediately above in hierarchy and enters results on form
    3. For every such hierarchy and editing focus, the tester (author) verifies that focus is correctly moved to element immediately below in hierarchy and enters results on form
    4. For every such hierarchy and editing focus, the tester (author) verifies that focus is correctly moved to element immediately to left or right in hierarchy and enters results on form
    5. Tool designer defines element hierarchies supported on form?
    6. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be selected by the tool, and enters results on form.
    7. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be copied by the tool, and enters results on form.
    8. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be cut by the tool, and enters results on form.
    9. Given such an element hierarchy, the tester (author) verifies that every element (with its content) in that hierarchy can be pasted by the tool, and enters results on form.
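The focus moves tested in steps 2-4 can be modeled on a toy element tree, which may also help a tester enumerate the cases to try. The node structure below is illustrative only, not any particular tool's document model.

```python
# Toy model of the focus moves in steps 2-4: parent ("up"), first child
# ("down"), and adjacent sibling ("left"/"right"). Illustrative only.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.parent = None
        self.children = list(children)
        for child in self.children:
            child.parent = self

def move_focus(node, direction):
    """Return the node focus moves to, or None if the move is impossible."""
    if direction == "up":
        return node.parent
    if direction == "down":
        return node.children[0] if node.children else None
    siblings = node.parent.children if node.parent else [node]
    i = siblings.index(node)
    if direction == "right":
        return siblings[i + 1] if i + 1 < len(siblings) else None
    if direction == "left":
        return siblings[i - 1] if i > 0 else None
```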

  2. Success criteria 1.5:
    1. The authoring tool must have a search function for all editing views
    2. The author must be able to search for text within all text equivalents of any rendered non-text content
    3. The author must be able to specify whether to search content, markup, or both

    Test Plan

    1. The tool designer defines all editing views supported by the tool, and all search functions supported by the tool, on a form
    2. Given this information, the tester (author) verifies that at least one search function exists for a sample of editing views as defined above, and enters results on form
    3. Given this information, the tester (author) verifies that all samples of text within every text equivalent are searched for successfully, and enters results on form
    4. Given this information, the tester (author) verifies that the author can successfully specify whether to search content, markup, or both, and enters results on a form.
    5. Given this information, the tester (author) verifies that all searching mentioned previously is performed successfully, and enters results on a form.

  3. Success Criteria 2.7: When unrecognized markup (e.g. external entity, unrecognized element or attribute name) is detected, the tool must query the author for consent to modify the markup. If the author refuses, and the markup cannot be processed, the tool must refuse to open the markup for editing.

    Test Plan

    1. The tool designer defines what unrecognized markup is in the context of the tool (documentation) on a form
    2. The tool designer defines how unrecognized markup is detected by the tool, and what actions the tool takes in those instances, on a form
    3. Given this information, the tester (author) verifies that a query is always successfully directed at the author in every such instance, and enters results on a form.
    4. Given this information, the tester (author) verifies that each such query successfully gives the author a valid choice as to whether to modify the markup, and enters results on form.
    5. Given this information, the tester (author) verifies that each time the author refuses such a query, the tool does not open the markup for editing, and enters the results on form.
    6. Given this information, the tester (author) verifies that each time the author accepts such a query, the tool does open the markup for editing, and enters the results on form.
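The consent flow being tested in steps 3-6 can be summarized in a few lines. The recognized-element set, the detection regex, and the consent callback below are all assumptions; a real tool would detect unrecognized entities and attributes as well as element names.

```python
# Sketch of the SC 2.7 behavior: unknown element names trigger a query
# to the author; on refusal the tool declines to open the markup.
# RECOGNIZED and ask_author are assumptions for illustration.
import re

RECOGNIZED = {"html", "body", "p", "img", "ul", "li"}

def open_for_editing(markup: str, ask_author) -> bool:
    """Return True if the tool opens the markup for editing."""
    names = set(re.findall(r"</?([a-zA-Z][\w-]*)", markup))
    unknown = names - RECOGNIZED
    if not unknown:
        return True
    # Query the author for consent to modify the unrecognized markup.
    if ask_author(sorted(unknown)):
        return True  # author consented; the tool may modify and proceed
    return False     # author refused and the markup cannot be processed
```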

  4. Success Criteria 3.8: All examples of markup code and views of the user interface (dialog screenshots, etc.) must satisfy all Level 2 success criteria below of WCAG2.0 (10/27/03 draft), regardless of whether the examples are intended to demonstrate accessibility authoring practices (NOTE: Should this be relative priority since it refers to WCAG?):

    Test Plan

    1. The tool designer defines all types of markup code and all views of the user interface supported by the tool on a form
    2. Given this information, author (tester) successfully verifies each following item (or N/A) and enters results on a form:
      1. The tester verifies that subject to previous info non-text content that cannot be expressed in words always has a text equivalent for all aspects that can be expressed in words
      2. The tester verifies that subject to previous info a text document that merges all audio descriptions and captions into a collated script is always provided.
      3. The tester verifies that subject to previous info captions and audio descriptions are always provided for all live broadcasts.
      4. The tester verifies that subject to previous info any information presented using color is always available without color and without having to interpret markup.
      5. The tester verifies that subject to previous info all abbreviations and acronyms are clearly identified each time they occur if they collide with a word in the standard language that would also logically appear in the same case
      6. The tester verifies that subject to previous info all symbols such as diacritic marks that are found in standard usage of the natural language of the content, and that are necessary for unambiguous identification of words, are always present or another standard mechanism for disambiguation is always provided.
      7. The tester verifies that subject to previous info all structural elements present have a different visual appearance or auditory characteristic from each other and from body text.
      8. The tester verifies that subject to previous info all text that is presented over a background color or grayscale has a mechanism that allows the text to be presented in a fashion that has a "large" contrast between text and background color.
      9. The tester verifies that subject to previous info all audio content does not contain background sounds or the background sounds are at least 20 dB lower than the foreground audio content.
      10. The tester verifies that subject to previous info wherever a choice between event handlers is available and supported, the more abstract event is always used.
      11. The tester verifies that subject to previous info that any blinking content can always be turned off.
      12. The tester verifies that subject to previous info that any moving content can always be paused.
      13. The tester verifies that subject to previous info all animation or other content does not visibly or purposely flicker between 3 and 49Hz.
      14. The tester verifies that subject to previous info all content that might create a problem has been tested, and only pages with unavoidable flicker remain and appropriate warnings along with a close alternative presentation have been provided for these pages.
      15. The tester verifies that subject to previous info that in all documents greater than 50000 words or all sites larger than 50 perceived pages, at least one of the following is provided: hierarchical structure markup, table of contents, and alternate display orders
      16. The tester verifies that subject to previous info users are always able to skip over large blocks of repetitive material, navigational bars or other blocks of links that are greater than 7 when reading with synthesized speech or navigating using a keyboard.
      17. The tester verifies that subject to previous info if an error is detected, feedback is always provided to the user identifying the error.
      18. The tester verifies that subject to previous info all acronyms and abbreviations do not appear first in standard unabridged dictionaries for the language, or are always defined the first time they appear, or are always available in a glossary on the site.
      19. The tester verifies that subject to previous info all content has been reviewed, taking into account the following strategies for evaluating the complexity of content, applying as appropriate: familiarity of terms and language structure, reasonableness of length and complexity of sentences, coherence of paragraphs (and sensibility in length), clarity of headings and linked text when read out of context, accuracy and uniqueness of page titles, care in the use of all-capital letters where normal sentence case might increase comprehension, inclusion of non-text content to supplement text for key pages or sections of the site where appropriate.
      20. The tester verifies that subject to previous info all key orientation and navigational elements are generally found in one or two consistent locations or their locations are always otherwise predictable
      21. The tester verifies that subject to previous info where inconsistent or unpredictable responses are essential to the function of the content, the user is always warned in advance of encountering them
      22. The tester verifies that subject to previous info wherever there are extreme changes in context, one of the following is always true: an easy-to-find setting, that persists for the site visit, is provided for the user to deactivate processes or features that cause extreme changes in context, or extreme changes in context are identified before they occur so they can be prepared for the change
      23. The tester verifies that subject to previous info for all markup, the markup has: passed validity tests of the language, structural elements and attributes are used as defined in the specification, accessibility features are used, and deprecated features are avoided
      24. The tester verifies that subject to previous info all accessibility conventions of the markup or programming language are always used
      25. The tester verifies that subject to previous info all relevant interfaces have been tested using a variety of assistive technologies (and preferably real people with disabilities) to determine that those assistive technologies are always able to access all information on the page or hidden within the page
      26. The tester verifies that subject to previous info all applicable Web resources include a list of the technologies users must have in order for its content to work as intended
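Item 13 above (no flicker between 3 and 49 Hz) is one of the few entries a tester can reduce to arithmetic. The sketch below is an assumed model in which an element toggles visibility at a measured rate; the two-toggles-per-cycle convention is an assumption about how the rate is counted.

```python
# Arithmetic sketch of the item-13 flicker test: a content element that
# changes state toggles_per_second times flickers at half that rate in
# full cycles per second (on + off = one cycle, an assumed convention).
def flickers_in_danger_band(toggles_per_second: float) -> bool:
    frequency_hz = toggles_per_second / 2.0
    return 3.0 <= frequency_hz <= 49.0
```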

  5. Success Criteria 4.1:
    1. When a tool provides a means for markup to be added with a single mouse click or keystroke, that markup must satisfy all of the Level 2 success criteria of WCAG2.0 (10/27/03 draft) unless the markup was authored "by hand".
    2. When an authoring action has several markup implementations (e.g., changing the color of text with presentation markup or style sheets), those markup implementations that satisfy all of the Level 2 success criteria of WCAG2.0 (10/27/03 draft) must be equal to or higher on all of the following scales than those markup implementations that do not meet the above WCAG2.0 requirements:
      1. prominence of location (in "power tools" such as floating menus, toolbars, etc.)
      2. position in layout (top to bottom and left to right in menus, dialog boxes, etc.)
      3. size of control (measured as screen area)
      4. actions to activate (number of mouse clicks or keystrokes)
    (NOTE: Should this be relative priority?)

    Test Plan:

    1. The tool designer defines all ways supported by the tool of adding markup with a single mouse click or keystroke, and enters results on form
    2. Given this information (for each way and all markup examples), author (tester) successfully verifies each following item (or N/A as appropriate) and enters results on a form:
      1. The tester verifies that subject to previous info non-text content that cannot be expressed in words always has a text equivalent for all aspects that can be expressed in words
      2. The tester verifies that subject to previous info a text document that merges all audio descriptions and captions into a collated script is always provided.
      3. The tester verifies that subject to previous info captions and audio descriptions are always provided for all live broadcasts.
      4. The tester verifies that subject to previous info any information presented using color is always available without color and without having to interpret markup.
      5. The tester verifies that subject to previous info all abbreviations and acronyms are clearly identified each time they occur if they collide with a word in the standard language that would also logically appear in the same case
      6. The tester verifies that subject to previous info all symbols such as diacritic marks that are found in standard usage of the natural language of the content, and that are necessary for unambiguous identification of words, are always present or another standard mechanism for disambiguation is always provided.
      7. The tester verifies that subject to previous info all structural elements present have a different visual appearance or auditory characteristic from each other and from body text.
      8. The tester verifies that subject to previous info all text that is presented over a background color or grayscale has a mechanism that allows the text to be presented in a fashion that has a "large" contrast between text and background color.
      9. The tester verifies that subject to previous info all audio content does not contain background sounds or the background sounds are at least 20 dB lower than the foreground audio content.
      10. The tester verifies that subject to previous info wherever a choice between event handlers is available and supported, the more abstract event is always used.
      11. The tester verifies that subject to previous info any blinking content can always be turned off.
      12. The tester verifies that subject to previous info any moving content can always be paused.
      13. The tester verifies that subject to previous info all animation or other content does not visibly or purposely flicker between 3 and 49Hz.
      14. The tester verifies that subject to previous info all content that might create a problem has been tested, and only pages with unavoidable flicker remain and appropriate warnings along with a close alternative presentation have been provided for these pages.
      15. The tester verifies that subject to previous info in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is provided: hierarchical structure markup, table of contents, and alternate display orders
      16. The tester verifies that subject to previous info users are always able to skip over large blocks of repetitive material, navigational bars or other blocks of links that are greater than 7 when reading with synthesized speech or navigating using the keyboard.
      17. The tester verifies that subject to previous info if an error is detected, feedback is always provided to the user identifying the error.
      18. The tester verifies that subject to previous info all acronyms and abbreviations do not appear first in standard unabridged dictionaries for the language, or are always defined the first time they appear, or are always available in a glossary on the site.
      19. The tester verifies that subject to previous info all content has been reviewed, taking into account the following strategies for evaluating the complexity of content, applying as appropriate: familiarity of terms and language structure, reasonableness of length and complexity of sentences, coherence of paragraphs (and sensibility in length), clarity of headings and linked text when read out of context, accuracy and uniqueness of page titles, care in the use of all-capital letters where normal sentence case might increase comprehension, inclusion of non-text content to supplement text for key pages or sections of the site where appropriate.
      20. The tester verifies that subject to previous info all key orientation and navigational elements are generally found in one or two consistent locations or their locations are always otherwise predictable
      21. The tester verifies that subject to previous info where inconsistent or unpredictable responses are essential to the function of the content, the user is always warned in advance of encountering them
      22. The tester verifies that subject to previous info wherever there are extreme changes in context, one of the following is always true: an easy-to-find setting, that persists for the site visit, is provided for the user to deactivate processes or features that cause extreme changes in context, or extreme changes in context are identified before they occur so they can be prepared for the change
      23. The tester verifies that subject to previous info for all markup: the markup has passed validity tests of the language, structural elements and attributes are used as defined in the specification, accessibility features are used, and deprecated features are avoided
      24. The tester verifies that subject to previous info all accessibility conventions of the markup or programming language are always used
      25. The tester verifies that subject to previous info all relevant interfaces have been tested using a variety of assistive technologies (and preferably real people with disabilities) to determine that those assistive technologies are always able to access all information on the page or hidden within the page
      26. The tester verifies that subject to previous info all applicable Web resources include a list of the technologies users must have in order for its content to work as intended
    3. The tool designer defines all markup implementations supported by the tool on a form
    4. The author identifies authoring actions supporting WCAG-compliant markup implementations of the tool
    5. The author identifies authoring actions supporting non WCAG-compliant markup implementations of the tool
    6. The tester verifies that every markup implementation on the WCAG list is "better" than every implementation on the non-WCAG list in terms of prominence of location
    7. The tester verifies that every markup implementation on the WCAG list is "better" than every implementation on the non-WCAG list in terms of position in layout
    8. The tester verifies that every markup implementation on the WCAG list is "better" than every implementation on the non-WCAG list in terms of size of control
    9. The tester verifies that every markup implementation on the WCAG list is "better" than every implementation on the non-WCAG list in terms of actions to activate
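Steps 6 through 9 above amount to a pairwise dominance check: every WCAG-compliant markup implementation must be equal or better on all four scales than every non-compliant one. A minimal sketch of how a tester might record form entries and run the comparison (the class, field names, and example values are hypothetical illustrations, not taken from any tool or from the ATAG draft):

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class MarkupImpl:
    name: str
    prominence: int  # higher = more prominent location (e.g., main toolbar vs buried menu)
    position: int    # higher = earlier in layout (top/left in menus and dialog boxes)
    size: int        # control size measured as screen area (pixels)
    actions: int     # mouse clicks or keystrokes needed to activate (fewer is better)

def dominates(wcag: MarkupImpl, other: MarkupImpl) -> bool:
    """True if the WCAG-compliant implementation is equal or better on every scale."""
    return (wcag.prominence >= other.prominence
            and wcag.position >= other.position
            and wcag.size >= other.size
            and wcag.actions <= other.actions)  # fewer activation actions is better

def verify(wcag_list, non_wcag_list):
    """Steps 6-9: every WCAG implementation must dominate every non-WCAG one."""
    return [(w.name, n.name)
            for w, n in product(wcag_list, non_wcag_list)
            if not dominates(w, n)]  # empty result means the tool passes

# Hypothetical entries from the tester's form:
css_color = MarkupImpl("style-sheet color", prominence=3, position=2, size=400, actions=1)
font_tag  = MarkupImpl("font-tag color",    prominence=1, position=1, size=200, actions=3)
print(verify([css_color], [font_tag]))  # [] -> passes
```

The tester would fill in one `MarkupImpl` row per implementation identified in steps 4 and 5, and a non-empty result pinpoints exactly which pair and which scale failed.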

  6. Success Criteria 4.X: TBD

  7. Success Criteria 4.4: The mechanisms for accessibility prompting, checking, repair and documentation must be similar to comparable mechanisms in terms of the following characteristics:
    1. visual design (design metaphors, artistic sophistication, sizes, fonts, colors)
    2. operation (degree of automation, number of actions for activation)
    3. configurability (number and types of features)
    (NOTE: This may be in AAA below?)

    Test Plan

    1. Tool designer defines all mechanisms for accessibility prompting, checking, repair and documentation supported by the tool on a form
    2. Tool designer defines all comparable mechanisms to the above supported by the tool on a form, and why they are comparable
    3. Given this info, the tester (author) verifies that all mechanisms in the accessibility list have the same visual design as all items in the comparable mechanism list, and enters the results on a form
    4. Given this info, the tester (author) verifies that all mechanisms in the top list above have the same operation as all items in the comparable mechanism list, and enters the results on a form
    5. Given this info, the tester (author) verifies that all mechanisms in the top list above have the same configurability as all items in the comparable mechanism list, and enters the results on a form
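Steps 3 through 5 compare each accessibility mechanism against each comparable mechanism on three characteristics, with the results entered on a form. One way such a form record could be structured, following the public-listing fields mentioned earlier (test number, success criteria number, individuals involved, results); all field names here are illustrative assumptions, since the ATAG draft does not prescribe a form layout:

```python
from dataclasses import dataclass, field

CHARACTERISTICS = ("visual design", "operation", "configurability")

@dataclass
class SimilarityRecord:
    test_number: int
    success_criteria: str       # e.g., "4.4"
    tester: str                 # individual involved
    accessibility_mechanism: str
    comparable_mechanism: str
    results: dict = field(default_factory=dict)  # characteristic -> judgment

    def record(self, characteristic: str, similar: bool, note: str = ""):
        assert characteristic in CHARACTERISTICS
        self.results[characteristic] = {"similar": similar, "note": note}

    def passed(self) -> bool:
        """Pass only if every characteristic was judged and found similar."""
        return (len(self.results) == len(CHARACTERISTICS)
                and all(r["similar"] for r in self.results.values()))

# Hypothetical form entry comparing an accessibility checker to a spell checker:
rec = SimilarityRecord(1, "4.4", "A. Tester", "accessibility checker", "spell checker")
rec.record("visual design", True, "same dialog metaphor, fonts, and colors")
rec.record("operation", True, "both run with one menu action")
rec.record("configurability", False, "spell checker offers per-language options")
print(rec.passed())  # False
```

One record would be filled in for each (accessibility mechanism, comparable mechanism) pair defined in steps 1 and 2, so a failure identifies both the pair and the characteristic.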

    ---------------------

  8. Success Criteria 2.5: All markup strings written automatically by the tool (i.e., not authored "by hand") must satisfy all of the Level 2 success criteria (WCAG2.0 10/27/03 draft). (RELATIVE PRIORITY):

    Test Plan:

    1. The tool designer defines on a form how markup strings are automatically generated using the tool
    2. Using this info, the author (tester) verifies each of the following items (or enters N/A as appropriate) on a form:
      1. The tester verifies that in such examples non-text content that cannot be expressed in words always has a text equivalent for all aspects that can be expressed in words
      2. The tester verifies that in such examples a text document that merges all audio descriptions and captions into a collated script is always provided.
      3. The tester verifies that in such examples captions and audio descriptions are always provided for all live broadcasts.
      4. The tester verifies that in such examples any information presented using color is always available without color and without having to interpret markup.
      5. The tester verifies that in such examples all abbreviations and acronyms are clearly identified each time they occur if they collide with a word in the standard language that would also logically appear in the same case
      6. The tester verifies that in such examples all symbols such as diacritic marks that are found in standard usage of the natural language of the content, and that are necessary for unambiguous identification of words, are always present or another standard mechanism for disambiguation is always provided.
      7. The tester verifies that in such examples all structural elements present have a different visual appearance or auditory characteristic from each other and from body text.
      8. The tester verifies that in such examples all text that is presented over a background color or grayscale has a mechanism that allows the text to be presented in a fashion that has a "large" contrast between text and background color.
      9. The tester verifies that in such examples all audio content does not contain background sounds or the background sounds are at least 20 dB lower than the foreground audio content.
      10. The tester verifies that in such examples wherever a choice between event handlers is available and supported, the more abstract event is always used.
      11. The tester verifies that in such examples any blinking content can always be turned off.
      12. The tester verifies that in such examples any moving content can always be paused.
      13. The tester verifies that in such examples all animation or other content does not visibly or purposely flicker between 3 and 49Hz.
      14. The tester verifies that in such examples all content that might create a problem has been tested, and only pages with unavoidable flicker remain and appropriate warnings along with a close alternative presentation have been provided for these pages.
      15. The tester verifies that in such examples in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is provided: hierarchical structure markup, table of contents, and alternate display orders
      16. The tester verifies that in such examples users are always able to skip over large blocks of repetitive material, navigational bars or other blocks of links that are greater than 7 when reading with synthesized speech or navigating using the keyboard.
      17. The tester verifies that in such examples if an error is detected, feedback is always provided to the user identifying the error.
      18. The tester verifies that in such examples all acronyms and abbreviations do not appear first in standard unabridged dictionaries for the language, or are always defined the first time they appear, or are always available in a glossary on the site.
      19. The tester verifies that in such examples all content has been reviewed, taking into account the following strategies for evaluating the complexity of content, applying as appropriate: familiarity of terms and language structure, reasonableness of length and complexity of sentences, coherence of paragraphs (and sensibility in length), clarity of headings and linked text when read out of context, accuracy and uniqueness of page titles, care in the use of all-capital letters where normal sentence case might increase comprehension, inclusion of non-text content to supplement text for key pages or sections of the site where appropriate.
      20. The tester verifies that in such examples all key orientation and navigational elements are generally found in one or two consistent locations or their locations are always otherwise predictable
      21. The tester verifies that in such examples where inconsistent or unpredictable responses are essential to the function of the content, the user is always warned in advance of encountering them
      22. The tester verifies that in such examples wherever there are extreme changes in context, one of the following is always true: an easy-to-find setting, that persists for the site visit, is provided for the user to deactivate processes or features that cause extreme changes in context, or extreme changes in context are identified before they occur so they can be prepared for the change
      23. The tester verifies that in such examples for all markup: the markup has passed validity tests of the language, structural elements and attributes are used as defined in the specification, accessibility features are used, and deprecated features are avoided
      24. The tester verifies that in such examples all accessibility conventions of the markup or programming language are always used
      25. The tester verifies that in such examples all relevant interfaces have been tested using a variety of assistive technologies (and preferably real people with disabilities) to determine that those assistive technologies are always able to access all information on the page or hidden within the page
      26. The tester verifies that in such examples all applicable Web resources include a list of the technologies users must have in order for its content to work as intended
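Several of the items above can be at least partially machine-checked against each automatically generated markup string, which can speed up the tester's pass through the list. A rough sketch of such a screen, covering only a few mechanically detectable items (the function name, the item-to-rule mapping, and the rules themselves are assumptions for illustration, not part of the ATAG draft):

```python
import re

def check_generated_markup(markup: str) -> list:
    """Return (checklist item, message) pairs for mechanically detectable problems."""
    problems = []
    # Item 1 (partial): every <img> should carry a text equivalent (alt attribute).
    for img in re.findall(r"<img\b[^>]*>", markup, re.IGNORECASE):
        if not re.search(r"\balt\s*=", img, re.IGNORECASE):
            problems.append((1, f"image without text equivalent: {img}"))
    # Item 11 (partial): blinking content the user cannot turn off.
    if re.search(r"<blink\b", markup, re.IGNORECASE):
        problems.append((11, "deprecated <blink> element generated"))
    # Item 23 (partial): deprecated presentational markup should be avoided.
    for tag in ("font", "center"):
        if re.search(rf"<{tag}\b", markup, re.IGNORECASE):
            problems.append((23, f"deprecated <{tag}> element generated"))
    return problems

# Run against a hypothetical tool-generated string and list findings on the form:
print(check_generated_markup('<img src="logo.gif"><font size="2">hi</font>'))
```

Items such as 19 (reviewing content complexity) or 25 (testing with assistive technologies) remain human judgments; a screen like this only narrows where the tester must look.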

  9. Success Criteria 2.6: Any web content (e.g., templates, clip art, multimedia objects, scripts, applets, example pages, etc.) preferentially licensed for users of the tool (i.e., better terms for users of the tool than for others) must satisfy all of the Level 2 success criteria of WCAG2.0 (10/27/03 draft)(RELATIVE PRIORITY):

    Test Plan:

    1. The tool designer defines all examples of preferentially-licensed content supported by the tool on a form
    2. The tool designer defines what preferential licensing means in the context of the tool, on a form
    3. Using this info, the author (tester) verifies each of the following items (or enters N/A as appropriate) on a form:
      1. The tester verifies that in such examples non-text content that cannot be expressed in words always has a text equivalent for all aspects that can be expressed in words
      2. The tester verifies that in such examples a text document that merges all audio descriptions and captions into a collated script is always provided.
      3. The tester verifies that in such examples captions and audio descriptions are always provided for all live broadcasts.
      4. The tester verifies that in such examples any information presented using color is always available without color and without having to interpret markup.
      5. The tester verifies that in such examples all abbreviations and acronyms are clearly identified each time they occur if they collide with a word in the standard language that would also logically appear in the same case
      6. The tester verifies that in such examples all symbols such as diacritic marks that are found in standard usage of the natural language of the content, and that are necessary for unambiguous identification of words, are always present or another standard mechanism for disambiguation is always provided.
      7. The tester verifies that in such examples all structural elements present have a different visual appearance or auditory characteristic from each other and from body text.
      8. The tester verifies that in such examples all text that is presented over a background color or grayscale has a mechanism that allows the text to be presented in a fashion that has a "large" contrast between text and background color.
      9. The tester verifies that in such examples all audio content does not contain background sounds or the background sounds are at least 20 dB lower than the foreground audio content.
      10. The tester verifies that in such examples wherever a choice between event handlers is available and supported, the more abstract event is always used.
      11. The tester verifies that in such examples any blinking content can always be turned off.
      12. The tester verifies that in such examples any moving content can always be paused.
      13. The tester verifies that in such examples all animation or other content does not visibly or purposely flicker between 3 and 49Hz.
      14. The tester verifies that in such examples all content that might create a problem has been tested, and only pages with unavoidable flicker remain and appropriate warnings along with a close alternative presentation have been provided for these pages.
      15. The tester verifies that in such examples in all documents greater than 50,000 words or all sites larger than 50 perceived pages, at least one of the following is provided: hierarchical structure markup, table of contents, and alternate display orders
      16. The tester verifies that in such examples users are always able to skip over large blocks of repetitive material, navigational bars or other blocks of links that are greater than 7 when reading with synthesized speech or navigating using the keyboard.
      17. The tester verifies that in such examples if an error is detected, feedback is always provided to the user identifying the error.
      18. The tester verifies that in such examples all acronyms and abbreviations do not appear first in standard unabridged dictionaries for the language, or are always defined the first time they appear, or are always available in a glossary on the site.
      19. The tester verifies that in such examples all content has been reviewed, taking into account the following strategies for evaluating the complexity of content, applying as appropriate: familiarity of terms and language structure, reasonableness of length and complexity of sentences, coherence of paragraphs (and sensibility in length), clarity of headings and linked text when read out of context, accuracy and uniqueness of page titles, care in the use of all-capital letters where normal sentence case might increase comprehension, inclusion of non-text content to supplement text for key pages or sections of the site where appropriate.
      20. The tester verifies that in such examples all key orientation and navigational elements are generally found in one or two consistent locations or their locations are always otherwise predictable
      21. The tester verifies that in such examples where inconsistent or unpredictable responses are essential to the function of the content, the user is always warned in advance of encountering them
      22. The tester verifies that in such examples wherever there are extreme changes in context, one of the following is always true: an easy-to-find setting, that persists for the site visit, is provided for the user to deactivate processes or features that cause extreme changes in context, or extreme changes in context are identified before they occur so they can be prepared for the change
      23. The tester verifies that in such examples for all markup: the markup has passed validity tests of the language, structural elements and attributes are used as defined in the specification, accessibility features are used, and deprecated features are avoided
      24. The tester verifies that in such examples all accessibility conventions of the markup or programming language are always used
      25. The tester verifies that in such examples all relevant interfaces have been tested using a variety of assistive technologies (and preferably real people with disabilities) to determine that those assistive technologies are always able to access all information on the page or hidden within the page
      26. The tester verifies that in such examples all applicable Web resources include a list of the technologies users must have in order for its content to work as intended

  10. Success Criteria 3.1:
    1. When the actions of the author risk creating accessibility problems according to any of the Level 2 success criteria of WCAG2.0 (10/27/03 draft), the tool must intervene to introduce the appropriate accessible authoring practice. This intervention may proceed according to a user-configurable schedule.
    2. The intervention must occur at least once before completion of authoring (e.g., final save, publishing, etc.)(RELATIVE PRIORITY)

    Test Plan:

    1. The tool designer defines all author-initiated actions supported by the tool on a form
    2. The tool designer describes how accessibility problems are detected using the tool on a form
    3. The tool designer describes how the tool intervenes given the preceding information, on a form
    4. Given this info, the author (tester) verifies each following item (or enters N/A as appropriate) on a form:
      1. The tester verifies that if as a result of such actions non-text content that cannot be expressed in words does not always have a text equivalent for all aspects that can be expressed in words, the tool always produces an acceptable alternative
      2. The tester verifies that if as a result of such actions a text document that merges all audio descriptions and captions into a collated script is not always provided, the tool always produces an acceptable alternative.
      3. The tester verifies that if as a result of such actions captions and audio descriptions are not always provided for all live broadcasts, the tool always produces an acceptable alternative.
      4. The tester verifies that if as a result of such actions any information presented using color is not always available without color and without having to interpret markup, the tool always produces an acceptable alternative.
      5. The tester verifies that if as a result of such actions all abbreviations and acronyms are not clearly identified each time they occur if they collide with a word in the standard language that would also logically appear in the same case, the tool always produces an acceptable alternative
      6. The tester verifies that if as a result of such actions all symbols such as diacritic marks that are found in standard usage of the natural language of the content, and that are necessary for unambiguous identification of words, are not always present or another standard mechanism for disambiguation is not always provided, the tool always produces an acceptable alternative.
      7. The tester verifies that if as a result of such actions all structural elements present do not have a different visual appearance or auditory characteristic from each other and from body text, the tool always produces an acceptable alternative.
      8. The tester verifies that if as a result of such actions all text that is presented over a background color or grayscale does not have a mechanism that allows the text to be presented in a fashion that has a "large" contrast between text and background color, the tool always produces an acceptable alternative
      9. The tester verifies that if as a result of such actions all audio content contains background sounds or the background sounds are not at least 20 dB lower than the foreground audio content, the tool always produces an acceptable alternative.
      10. The tester verifies that if as a result of such actions wherever a choice between event handlers is available and supported, the more abstract event is not always used, the tool always produces an acceptable alternative.
      11. The tester verifies that if as a result of such actions any blinking content cannot always be turned off, the tool always produces an acceptable alternative.
      12. The tester verifies that if as a result of such actions any moving content cannot always be paused, the tool always produces an acceptable alternative.
      13. The tester verifies that if as a result of such actions any animation or other content visibly or purposely flickers between 3 and 49Hz, the tool always produces an acceptable alternative.
      14. The tester verifies that if as a result of such actions content that might create a problem has not been tested, or appropriate warnings along with a close alternative presentation have not been provided for pages with unavoidable flicker, the tool always produces an acceptable alternative.
      15. The tester verifies that if as a result of such actions in all documents greater than 50,000 words or all sites larger than 50 perceived pages, none of the following is provided: hierarchical structure markup, table of contents, or alternate display orders, the tool always produces an acceptable alternative
      16. The tester verifies that if as a result of such actions users are not always able to skip over large blocks of repetitive material, navigational bars or other blocks of links that are greater than 7 when reading with synthesized speech or navigating using the keyboard, the tool always produces an acceptable alternative.
      17. The tester verifies that if as a result of such actions if an error is detected, feedback is not always provided to the user identifying the error, the tool always produces an acceptable alternative.
      18. The tester verifies that if as a result of such actions all acronyms and abbreviations appear first in standard unabridged dictionaries for the language, or are not always defined the first time they appear, or are not always available in a glossary on the site, the tool always produces an acceptable alternative.
      19. The tester verifies that if as a result of such actions all content has not been reviewed, taking into account the following strategies for evaluating the complexity of content, applying as appropriate: familiarity of terms and language structure, reasonableness of length and complexity of sentences, coherence of paragraphs (and sensibility in length), clarity of headings and linked text when read out of context, accuracy and uniqueness of page titles, care in the use of all-capital letters where normal sentence case might increase comprehension, inclusion of non-text content to supplement text for key pages or sections of the site where appropriate, the tool always produces an acceptable alternative.
      20. The tester verifies that if as a result of such actions all key orientation and navigational elements are not generally found in one or two consistent locations or their locations are not always otherwise predictable, the tool always produces an acceptable alternative
      21. The tester verifies that if as a result of such actions where inconsistent or unpredictable responses are essential to the function of the content, the user is not always warned in advance of encountering them, the tool always produces an acceptable alternative
      22. The tester verifies that if as a result of such actions wherever there are extreme changes in context, none of the following is true: an easy-to-find setting, that persists for the site visit, is provided for the user to deactivate processes or features that cause extreme changes in context, or extreme changes in context are identified before they occur so they can be prepared for the change, the tool always produces an acceptable alternative
      23. The tester verifies that if as a result of such actions any markup fails any of the following: validity tests of the language, use of structural elements and attributes as defined in the specification, use of accessibility features, and avoidance of deprecated features, the tool always produces an acceptable alternative
      24. The tester verifies that if as a result of such actions all accessibility conventions of the markup or programming language are not always used, the tool always produces an acceptable alternative
      25. The tester verifies that if as a result of such actions all relevant interfaces have not been tested using a variety of assistive technologies (and preferably real people with disabilities) to determine that those assistive technologies are always able to access all information on the page or hidden within the page, the tool always produces an acceptable alternative
      26. The tester verifies that if as a result of such actions all applicable Web resources do not include a list of the technologies users must have in order for its content to work as intended, the tool always produces an acceptable alternative
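The "at least once before completion of authoring" requirement in SC 3.1 can be exercised directly: the tester triggers the tool's final-save or publish path on content with a known problem and confirms the intervention fired before completion. A toy harness illustrating the logic under test (the class, hook names, and the single check/repair pair are invented for illustration; a real tool exposes its own interfaces):

```python
class AuthoringSession:
    """Toy model of a tool that must intervene before final save (SC 3.1)."""

    def __init__(self, check, repair_prompt):
        self.check = check                  # returns a list of detected problems
        self.repair_prompt = repair_prompt  # intervention: prompts/repairs content
        self.interventions = 0
        self.content = ""

    def edit(self, new_content: str):
        self.content = new_content

    def publish(self) -> bool:
        """Final save: intervene on outstanding problems before completing."""
        problems = self.check(self.content)
        if problems:
            self.interventions += 1
            self.content = self.repair_prompt(self.content, problems)
        # Publishing completes only once re-checking finds no problems.
        return not self.check(self.content)

# Hypothetical check/repair pair for one criterion (missing text equivalent):
check = lambda c: ["missing alt"] if "<img" in c and "alt=" not in c else []
repair = lambda c, p: c.replace("<img", '<img alt=""', 1)

s = AuthoringSession(check, repair)
s.edit('<img src="a.png">')
assert s.publish() and s.interventions == 1  # intervened before completion
```

The tester's form entry would record, per author action from step 1, whether `interventions` (or its real-tool equivalent) was at least 1 by the time publishing completed.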

  11. Success Criteria 3.2: The tool must provide a check (automated check, semi-automated check or manual check) for detecting violations of each Level 2 success criterion of WCAG2 (10/27/03 draft)(RELATIVE PRIORITY)

    Test Plan:

    1. The tool designer defines how checks (for detecting accessibility violations) are provided by the tool on a form
    2. The tool designer describes each check (for detecting accessibility violations), and what kind it is, on a form
    3. Given this info, the author (tester) verifies each following item (or enters N/A as appropriate) on a form:
      1. The tester verifies that if non-text content that cannot be expressed in words does not always have a text equivalent for all aspects that can be expressed in words, the tool always produces an acceptable check
      2. The tester verifies that if a text document that merges all audio descriptions and captions into a collated script is not always provided, the tool always produces an acceptable check.
      3. The tester verifies that if captions and audio descriptions are not always provided for all live broadcasts, the tool always produces an acceptable check.
      4. The tester verifies that if any information presented using color is not always available without color and without having to interpret markup, the tool always produces an acceptable check.
      5. The tester verifies that if all abbreviations and acronyms are not clearly identified each time they occur if they collide with a word in the standard language that would also logically appear in the same case, the tool always produces an acceptable check
      6. The tester verifies that if all symbols such as diacritic marks that are found in standard usage of the natural language of the content, and that are necessary for unambiguous identification of words, are not always present or another standard mechanism for disambiguation is not always provided, the tool always produces an acceptable check.
      7. The tester verifies that if all structural elements present do not have a different visual appearance or auditory characteristic from each other and from body text, the tool always produces an acceptable check.
      8. The tester verifies that if all text that is presented over a background color or grayscale does not have a mechanism that allows the text to be presented in a fashion that has a "large" contrast between text and background color, the tool always produces an acceptable check
      9. The tester verifies that if all audio content contains background sounds or the background sounds are not at least 20 dB lower than the foreground audio content, the tool always produces an acceptable check.
      10. The tester verifies that if wherever a choice between event handlers is available and supported, the more abstract event is not always used, the tool always produces an acceptable check.
      11. The tester verifies that if any blinking content can not always be turned off, the tool always produces an acceptable check.
      12. The tester verifies that if any moving content can not always be paused, the tool always produces an acceptable check.
      13. The tester verifies that if all animation or other content visibly or purposely flickers between 3 and 49Hz, the tool always produces an acceptable check.
      14. The tester verifies that if all content that might create a problem has not been tested, and only pages with unavoidable flicker remain and appropriate warnings along with a close alternative presentation have not been provided for these pages, the tool always produces an acceptable check.
      15. The tester verifies that if any document greater than 50,000 words or any site larger than 50 perceived pages does not provide all of the following: hierarchical structure markup, a table of contents, and alternate display orders, the tool always produces an acceptable check.
      16. The tester verifies that if users are not always able to skip over large blocks of repetitive material, navigational bars, or other blocks of more than 7 links when reading with synthesized speech or navigating using the keyboard, the tool always produces an acceptable check.
      17. The tester verifies that if an error is detected and feedback is not always provided to the user identifying the error, the tool always produces an acceptable check.
      18. The tester verifies that if any acronyms and abbreviations that do not appear in standard unabridged dictionaries for the language are not always defined the first time they appear and are not available in a glossary on the site, the tool always produces an acceptable check.
      19. The tester verifies that if all content has not been reviewed, taking into account the following strategies for evaluating the complexity of content, applying as appropriate: familiarity of terms and language structure, reasonableness of length and complexity of sentences, coherence of paragraphs (and sensibility in length), clarity of headings and linked text when read out of context, accuracy and uniqueness of page titles, care in the use of all-capital letters where normal sentence case might increase comprehension, and inclusion of non-text content to supplement text for key pages or sections of the site where appropriate, the tool always produces an acceptable check.
      20. The tester verifies that if all key orientation and navigational elements are not generally found in one or two consistent locations or their locations are not always otherwise predictable, the tool always produces an acceptable check.
      21. The tester verifies that if, where inconsistent or unpredictable responses are essential to the function of the content, the user is not always warned in advance of encountering them, the tool always produces an acceptable check.
      22. The tester verifies that if, wherever there are extreme changes in context, none of the following are true: an easy-to-find setting, which persists for the site visit, is provided for the user to deactivate processes or features that cause extreme changes in context, or extreme changes in context are identified before they occur so the user can be prepared for the change, the tool always produces an acceptable check.
      23. The tester verifies that if, for any markup, the markup has not passed validity tests of the language, structural elements and attributes are not used as defined in the specification, accessibility features are not used, or deprecated features are not avoided, the tool always produces an acceptable check.
      24. The tester verifies that if all accessibility conventions of the markup or programming language are not always used, the tool always produces an acceptable check.
      25. The tester verifies that if all relevant interfaces have not been tested using a variety of assistive technologies (and preferably real people with disabilities) to determine that those assistive technologies are always able to access all information on the page or hidden within the page, the tool always produces an acceptable check.
      26. The tester verifies that if all applicable Web resources do not include a list of the technologies users must have in order for its content to work as intended, the tool always produces an acceptable check.
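
Several of the thresholds in the checklist above (the 20 dB audio separation in item 9, the 3-49 Hz flicker range in item 13, the 50,000-word/50-page limits in item 15, and the 7-link block in item 16) are machine-testable, so a tool's automated checks for them can themselves be spot-checked. The following sketch is illustrative only; the function names are invented for this example, not part of the ATAG test plan.

```python
# Hypothetical helpers mirroring four machine-testable thresholds from the
# checklist above. All names are invented for this sketch.

def background_audio_ok(foreground_db: float, background_db: float) -> bool:
    """Item 9: background sounds must be at least 20 dB below the foreground."""
    return (foreground_db - background_db) >= 20.0

def flicker_is_risky(flicker_hz: float) -> bool:
    """Item 13: flicker between 3 and 49 Hz is the range that must be flagged."""
    return 3.0 <= flicker_hz <= 49.0

def needs_structure_aids(word_count: int, perceived_pages: int) -> bool:
    """Item 15: documents over 50,000 words or sites over 50 perceived pages
    must provide structure markup, a table of contents, and alternate orders."""
    return word_count > 50_000 or perceived_pages > 50

def needs_skip_link(link_block_size: int) -> bool:
    """Item 16: blocks of more than 7 links should be skippable."""
    return link_block_size > 7
```

A tester could feed known-good and known-bad content through the tool and compare the tool's verdicts against thresholds like these when filling in the form.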

  12. Success Criteria 3.3: The tool must provide a repair (automated repair, semi-automated repair or manual repair) for correcting violations of each Level 2 success criterion of WCAG2.0 (10/28/03 draft) (RELATIVE PRIORITY):

    Test Plan:

    1. The tool designer defines how repairs (for correcting accessibility violations) are provided by the tool on a form
    2. The tool designer describes each repair (for correcting accessibility violations), and what kind it is, on a form
    3. Given this info, the author (tester) verifies each following item (or enters N/A as appropriate) on a form:
      1. The tester verifies that if non-text content that cannot be expressed in words does not always have a text equivalent for all aspects that can be expressed in words, the tool always produces an acceptable repair
      2. The tester verifies that if a text document that merges all audio descriptions and captions into a collated script is not always provided, the tool always produces an acceptable repair.
      3. The tester verifies that if captions and audio descriptions are not always provided for all live broadcasts, the tool always produces an acceptable repair.
      4. The tester verifies that if any information presented using color is not always available without color and without having to interpret markup, the tool always produces an acceptable repair.
      5. The tester verifies that if all abbreviations and acronyms that collide with a word in the standard language that would also logically appear in the same case are not clearly identified each time they occur, the tool always produces an acceptable repair.
      6. The tester verifies that if all symbols such as diacritic marks that are found in standard usage of the natural language of the content, and that are necessary for unambiguous identification of words, are not always present or another standard mechanism for disambiguation is not always provided, the tool always produces an acceptable repair.
      7. The tester verifies that if all structural elements present do not have a different visual appearance or auditory characteristic from each other and from body text, the tool always produces an acceptable repair.
      8. The tester verifies that if all text that is presented over a background color or grayscale does not have a mechanism that allows the text to be presented in a fashion that has a "large" contrast between text and background color, the tool always produces an acceptable repair.
      9. The tester verifies that if any audio content contains background sounds that are not at least 20 dB lower than the foreground audio content, the tool always produces an acceptable repair.
      10. The tester verifies that if wherever a choice between event handlers is available and supported, the more abstract event is not always used, the tool always produces an acceptable repair.
      11. The tester verifies that if any blinking content can not always be turned off, the tool always produces an acceptable repair.
      12. The tester verifies that if any moving content can not always be paused, the tool always produces an acceptable repair.
      13. The tester verifies that if any animation or other content visibly or purposely flickers between 3 and 49 Hz, the tool always produces an acceptable repair.
      14. The tester verifies that if all content that might create a problem has not been tested, and only pages with unavoidable flicker remain and appropriate warnings along with a close alternative presentation have not been provided for these pages, the tool always produces an acceptable repair.
      15. The tester verifies that if any document greater than 50,000 words or any site larger than 50 perceived pages does not provide all of the following: hierarchical structure markup, a table of contents, and alternate display orders, the tool always produces an acceptable repair.
      16. The tester verifies that if users are not always able to skip over large blocks of repetitive material, navigational bars, or other blocks of more than 7 links when reading with synthesized speech or navigating using the keyboard, the tool always produces an acceptable repair.
      17. The tester verifies that if an error is detected and feedback is not always provided to the user identifying the error, the tool always produces an acceptable repair.
      18. The tester verifies that if any acronyms and abbreviations that do not appear in standard unabridged dictionaries for the language are not always defined the first time they appear and are not available in a glossary on the site, the tool always produces an acceptable repair.
      19. The tester verifies that if all content has not been reviewed, taking into account the following strategies for evaluating the complexity of content, applying as appropriate: familiarity of terms and language structure, reasonableness of length and complexity of sentences, coherence of paragraphs (and sensibility in length), clarity of headings and linked text when read out of context, accuracy and uniqueness of page titles, care in the use of all-capital letters where normal sentence case might increase comprehension, and inclusion of non-text content to supplement text for key pages or sections of the site where appropriate, the tool always produces an acceptable repair.
      20. The tester verifies that if all key orientation and navigational elements are not generally found in one or two consistent locations or their locations are not always otherwise predictable, the tool always produces an acceptable repair.
      21. The tester verifies that if, where inconsistent or unpredictable responses are essential to the function of the content, the user is not always warned in advance of encountering them, the tool always produces an acceptable repair.
      22. The tester verifies that if, wherever there are extreme changes in context, none of the following are true: an easy-to-find setting, which persists for the site visit, is provided for the user to deactivate processes or features that cause extreme changes in context, or extreme changes in context are identified before they occur so the user can be prepared for the change, the tool always produces an acceptable repair.
      23. The tester verifies that if, for any markup, the markup has not passed validity tests of the language, structural elements and attributes are not used as defined in the specification, accessibility features are not used, or deprecated features are not avoided, the tool always produces an acceptable repair.
      24. The tester verifies that if all accessibility conventions of the markup or programming language are not always used, the tool always produces an acceptable repair.
      25. The tester verifies that if all relevant interfaces have not been tested using a variety of assistive technologies (and preferably real people with disabilities) to determine that those assistive technologies are always able to access all information on the page or hidden within the page, the tool always produces an acceptable repair.
      26. The tester verifies that if all applicable Web resources do not include a list of the technologies users must have in order for its content to work as intended, the tool always produces an acceptable repair.
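
The form entries for steps 2 and 3 above pair each checklist item with the kind of repair offered and the tester's verdict. A minimal sketch of such a record, with invented field names (nothing here is prescribed by the ATAG test plan):

```python
# Hypothetical "form" rows for the repair verification steps above.
# The dataclass and its fields are assumptions made for this sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RepairResult:
    item_number: int          # 1-26, matching the checklist above
    repair_kind: str          # "automated", "semi-automated", or "manual"
    acceptable: Optional[bool]  # None records an N/A entry

results = [
    RepairResult(11, "automated", True),  # blinking content can be turned off
    RepairResult(3, "manual", None),      # N/A: tool produces no live broadcasts
]
na_count = sum(1 for r in results if r.acceptable is None)
```

Recording N/A explicitly, rather than omitting the row, keeps the public listing auditable: every one of the 26 items has a disposition.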

    ----------------------------------------------------------------------------------------------------

    Conformance Level AAA (after doing A and AA)

    1. Success Criteria 1.1: The authoring interface must pass recommended elements of the Software Accessibility Guidelines testing criteria

      Test Plan

      1. The tool designer describes the authoring interface(s) supported by the tool on a form
      2. The author (tester) describes which software accessibility guidelines are being used for verification, on a form
      3. The author (tester) describes which elements of these guidelines are recommended, on a form
      4. Given the above, for each authoring interface, tester verifies that all elements of the interface specified above pass each recommended element of the applicable Software Accessibility Guidelines testing criteria, and enters the results on a form

    2. Success Criteria 3.5: When non-text objects have been previously inserted using the tool, the tool must suggest any previously authored textual equivalents for that non-text object

      Test Plan

      1. tool designer describes how non-text objects can be inserted using the tool on a form
      2. tool designer details the kinds of non-text objects that can be inserted (and which non-text objects have been inserted previously, and when) on a form
      3. tool designer defines how the tool generates previously authored textual equivalents (and how the tool associates these with specific non-text objects) on a form
      4. tool designer describes how the tool prompts the author as mentioned previously, on a form
      5. Given the previous information, author (tester) verifies that in fact the tool successfully prompts the author as appropriate when there is a "match"; the results are entered on a form
      6. Given the previous information, if author (tester) accepts what the prompt suggests, the author verifies that the tool successfully inserts the proper text equivalent for the correct non-text object every time; the results are entered on a form
      7. Given the previous information, if author (tester) declines what the prompt suggests, the author verifies that the tool does not insert a text equivalent in any case; the results are entered on a form
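
Steps 5-7 above hinge on the tool recognizing a previously inserted non-text object and recalling its text equivalent. One plausible mechanism, sketched here purely for illustration (the class, the content-hash keying, and all names are assumptions, not the test plan's requirements):

```python
# Sketch of a registry that remembers previously authored text equivalents
# and suggests them when the same non-text object is inserted again (SC 3.5).
import hashlib
from typing import Optional

class AltTextRegistry:
    def __init__(self) -> None:
        self._by_digest: dict = {}

    def record(self, object_bytes: bytes, text_equivalent: str) -> None:
        """Remember the equivalent the author wrote for this object."""
        self._by_digest[hashlib.sha256(object_bytes).hexdigest()] = text_equivalent

    def suggest(self, object_bytes: bytes) -> Optional[str]:
        """Return the previously authored equivalent if this object was seen before."""
        return self._by_digest.get(hashlib.sha256(object_bytes).hexdigest())

reg = AltTextRegistry()
reg.record(b"logo-image-bytes", "Company logo")
assert reg.suggest(b"logo-image-bytes") == "Company logo"  # match: prompt author
assert reg.suggest(b"new-image-bytes") is None             # no match: no prompt
```

Under this sketch, step 5's "match" is a digest hit, step 6 verifies the suggested string is inserted verbatim on acceptance, and step 7 verifies nothing is inserted on decline.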

    3. Success Criteria 3.6: The tool must provide the author with an option to view a listing of all current accessibility problems.

      Test Plan

      1. The tool designer defines how the tool detects accessibility problems, what kind of accessibility problems are detected, and how they are reported, on a form
      2. The tool designer describes how a listing of accessibility problems is presented to the author, on a form
      3. Given the previous info, the tester (author) verifies that the tool successfully informs the author of an option to list all known accessibility problems; results are entered on a form
      4. Given the previous info, if the author accepts the option, the author verifies that the entire list is made available to the author every time; results are entered on the form
      5. Given the previous info, the author verifies that every entry in the list in fact is an accessibility problem (and links back to the actual problem); results are entered on a form
      6. Given the previous info, if the author refuses the option, the author verifies that the list is not made available to the author; results are entered on a form
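
Step 5 requires that every listing entry both describe a genuine problem and link back to its location in the content. A minimal sketch of such a listing, with invented field names and example locations (not part of the ATAG test plan):

```python
# Sketch of the SC 3.6 problem listing: each entry identifies the problem
# and carries a location the author can jump back to. Names are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Problem:
    description: str
    location: str   # e.g. an element path within the document

def render_listing(problems: List[Problem]) -> List[str]:
    """Produce the full list the author sees when accepting the option."""
    return [f"{p.description} (at {p.location})" for p in problems]

listing = render_listing([
    Problem("img element missing text equivalent", "/html/body/img[1]"),
    Problem("table lacks header markup", "/html/body/table[2]"),
])
```

The tester's check in step 5 then amounts to following each location and confirming the described problem is actually present there.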

    4. Success Criteria 3.9:
      1. The documentation must contain suggested content creation workflow descriptions that include how and when to use the accessibility-related features of the tool
      2. For tools that lack a particular accessibility-related feature, the workflow description must include a workaround for that feature

      Test Plan

      1. tool designer describes all accessibility-related features of the tool (and how to use each) on a form
      2. tool designer describes documentation of tool on a form
      3. tool designer describes what content workflow descriptions are supported by the tool, how they are suggested, and how they are integrated into the documentation, on a form
      4. Given the previous info, author (tester) verifies that documentation for tool does in fact contain content creation workflow description; results are entered on a form
      5. Given the previous info, author (tester) verifies that each such content creation workflow description does in fact describe how and when to use each accessibility-related feature; results are entered on a form
      6. tool designer defines all accessibility-related features missing from the tool on a form
      7. Given the previous info, author (tester) verifies that for each such missing feature a content creation workflow description in the documentation in fact contains a successful workaround for that feature; results are entered on a form
      8. Given the previous info, author (tester) verifies that each such workaround does in fact work; results are entered on a form

      ------------------

    5. Success Criteria 2.5: All markup strings written automatically by the tool (i.e., not authored "by hand") must conform to at least one of the Level 3 success criteria of WCAG2 (10/27/03 draft)(RELATIVE PRIORITY):

      Test Plan

      1. tool designer explains how markup strings are written automatically by the tool (on form)
      2. tool designer defines which kinds of markup strings can be written by the tool (on a form)
      3. Given this information, author (tester) enters on form which of following items are satisfied (NOTE: must be at least one satisfied):
        1. Tester verifies that the presentation of the markup does not require the user to read captions and the visual presentation simultaneously in order to understand the content
        2. Tester verifies that the structural emphases of the markup are chosen to be distinct on different major visual display types
        3. Tester verifies that the content of the markup is constructed such that users can control the presentation of structural elements or the structure of the markup can be varied through alternate presentation formats
        4. Tester verifies that for markup when text content is presented over a background image or pattern, the text is easily readable when the page is viewed in 256 grayscale
        5. Tester verifies that for markup when text content is presented over a background image or pattern, the text is easily readable in default presentation mode
        6. Tester verifies that for markup there are no time limits as a part of a competitive activity
        7. Tester verifies that for markup the content has been reviewed, taking into account the following strategies for facilitating orientation and movement, applying as appropriate: breaking up text into logical paragraphs, providing hierarchical sections and titles, particularly for longer documents, revealing important non-hierarchical relationships, and dividing very large works into sections/chapters with logical labels
        8. Tester verifies that for markup information is provided that would allow an assistive technology to determine at least one logical, linear reading order
        9. Tester verifies that for markup diagrams are constructed in a fashion so that they have structure that can be accessed by the user
        10. Tester verifies that for markup where possible, logical tab order has been created
        11. Tester verifies that for markup where possible, the user is allowed to select from a list of options as well as to generate input text directly
        12. Tester verifies that for markup errors are identified specifically and suggestions for correction are provided where possible
        13. Tester verifies that for markup checks for misspelled words are applied and correct spellings are suggested when text entry is required
        14. Tester verifies that for markup where consequences are significant and time-response is not important, one of the following is true: (a) actions are reversible, (b) where not reversible, actions are checked for errors in advance, (c) where not reversible, and not checkable, a confirmation is asked before acceptance
        15. Tester verifies that for markup a list is provided on the home page of URIs to cascading dictionaries that can or should be used to define abbreviations or acronyms
        16. Tester verifies that for markup the content has been reviewed, taking into account the following strategies for determining the definition of abbreviations and acronyms, applying them as appropriate: (a) provide a definition or link (with the first occurrence) of phrases, words, acronyms, and abbreviations specific to a particular community, (b) provide a summary for relationships that may not be obvious from analyzing the structure of a table but that may be apparent in a visual rendering of the table, (c) if contracted forms of words are used such that they are ambiguous, provide semantic markup to make words unique and interpretable
        17. Tester verifies that for markup the content has been reviewed, taking into account the strategies for evaluating the complexity of content, applying as appropriate.
        18. Tester verifies that for markup a user can select a different location for navigation elements in the layout of the page
        19. Tester verifies that for markup the content has been reviewed, taking into account common ideas for making content consistent and predictable, applying as appropriate
        20. Tester verifies that for markup a list of technologies and features, support for which is required in order for the content to be operable, has been determined and is documented in metadata and/or a policy statement associated with the content
        21. Tester verifies that for markup technologies and features on the required list are available in at least two independently-developed implementations
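
Items 4 and 5 above (text over a background image must stay readable, including when viewed in 256 grayscale) can be approximated mechanically by collapsing colors to grayscale and comparing levels. The luminance weights below are the standard ITU-R BT.601 coefficients; the difference threshold of 125 is borrowed from the W3C color-brightness heuristic and should be treated as an assumption for this sketch, not a requirement of the test plan.

```python
# Approximate grayscale-readability check for items 4-5 above.
# Weights are BT.601 luma coefficients; the threshold is an assumption.

def to_gray(rgb):
    """Map an (r, g, b) color in 0-255 to a 0-255 grayscale level."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def readable_in_grayscale(text_rgb, background_rgb, min_difference=125):
    """Flag text whose grayscale level is too close to its background's."""
    return abs(to_gray(text_rgb) - to_gray(background_rgb)) >= min_difference

assert readable_in_grayscale((0, 0, 0), (255, 255, 255))    # black on white: ok
assert not readable_in_grayscale((255, 0, 0), (0, 255, 0))  # red/green collapse
```

The red-on-green case illustrates why the grayscale view matters: two colors that contrast strongly in hue can map to nearly the same gray level.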

    6. Success Criteria 2.6: Any web content (e.g., templates, clip art, multimedia objects, scripts, applets, example pages, etc.) preferentially licensed (i.e., better terms for users of the tool than for others) for users of the tool, must satisfy at least one of the Level 3 success criteria of WCAG2.0 (10/27/03 draft) (RELATIVE PRIORITY):

      Test Plan

      1. tool designer defines all kinds of web content which are preferentially licensed associated with tool (on form)
      2. tool designer defines what preferential licensing means in context of the tool (on form)
      3. Given this info, author (tester) verifies at least one of following items, and enters results on form:
        1. Tester verifies that the presentation of such content does not require the user to read captions and the visual presentation simultaneously in order to understand the content
        2. Tester verifies that the structural emphases of such content are chosen to be distinct on different major visual display types
        3. Tester verifies that the content of such content is constructed such that users can control the presentation of structural elements or the structure of such content can be varied through alternate presentation formats
        4. Tester verifies that for such content when text content is presented over a background image or pattern, the text is easily readable when the page is viewed in 256 grayscale
        5. Tester verifies that for such content when text content is presented over a background image or pattern, the text is easily readable in default presentation mode
        6. Tester verifies that for such content there are no time limits as a part of a competitive activity
        7. Tester verifies that for such content the content has been reviewed, taking into account the following strategies for facilitating orientation and movement, applying as appropriate: breaking up text into logical paragraphs, providing hierarchical sections and titles, particularly for longer documents, revealing important non-hierarchical relationships, and dividing very large works into sections/chapters with logical labels
        8. Tester verifies that for such content information is provided that would allow an assistive technology to determine at least one logical, linear reading order
        9. Tester verifies that for such content diagrams are constructed in a fashion so that they have structure that can be accessed by the user
        10. Tester verifies that for such content where possible, logical tab order has been created
        11. Tester verifies that for such content where possible, the user is allowed to select from a list of options as well as to generate input text directly
        12. Tester verifies that for such content errors are identified specifically and suggestions for correction are provided where possible
        13. Tester verifies that for such content checks for misspelled words are applied and correct spellings are suggested when text entry is required
        14. Tester verifies that for such content where consequences are significant and time-response is not important, one of the following is true: (a) actions are reversible, (b) where not reversible, actions are checked for errors in advance, (c) where not reversible, and not checkable, a confirmation is asked before acceptance
        15. Tester verifies that for such content a list is provided on the home page of URIs to cascading dictionaries that can or should be used to define abbreviations or acronyms
        16. Tester verifies that for such content the content has been reviewed, taking into account the following strategies for determining the definition of abbreviations and acronyms, applying them as appropriate: (a) provide a definition or link (with the first occurrence) of phrases, words, acronyms, and abbreviations specific to a particular community, (b) provide a summary for relationships that may not be obvious from analyzing the structure of a table but that may be apparent in a visual rendering of the table, (c) if contracted forms of words are used such that they are ambiguous, provide semantic markup to make words unique and interpretable
        17. Tester verifies that for such content the content has been reviewed, taking into account the strategies for evaluating the complexity of content, applying as appropriate.
        18. Tester verifies that for such content a user can select a different location for navigation elements in the layout of the page
        19. Tester verifies that for such content the content has been reviewed, taking into account common ideas for making content consistent and predictable, applying as appropriate
        20. Tester verifies that for such content a list of technologies and features, support for which is required in order for the content to be operable, has been determined and is documented in metadata and/or a policy statement associated with the content
        21. Tester verifies that for such content technologies and features on the required list are available in at least two independently-developed implementations

    7. Success Criteria 3.1:
      1. When the actions of the author risk creating accessibility problems according to the Level 3 success criteria of WCAG2.0 (10/27/03 draft) the tool must intervene to introduce the appropriate accessible authoring practice. This intervention may proceed according to a user-configurable schedule.
      2. The intervention must occur at least once before completion of authoring (e.g., final save, publishing, etc.)
      (RELATIVE PRIORITY):

      Test Plan

      1. tool designer describes kinds of accessibility problems that can be detected using tool (on form)
      2. tool designer describes how tool intervenes and notifies author (on form)
      3. tool designer describes how author can configure schedule (on form)
      4. author (tester) configures intervention schedule using previous info; note is made on form
      5. Given this info, author (tester) verifies each following item (or N/A as appropriate) and enters results on form (x is a small number, and is assumed to be before completion of authoring):
        1. Tester verifies that if it is determined that the presentation requires the user to read captions and the visual presentation simultaneously in order to understand the content, the tool successfully intervenes within x seconds of the determination.
        2. Tester verifies that if it is determined that the structural emphases are not chosen to be distinct on different major visual display types, the tool successfully intervenes within x seconds of the determination.
        3. Tester verifies that if it is determined that the content is not constructed such that users can control the presentation of structural elements or the structure of such content can not be varied through alternate presentation formats, the tool successfully intervenes within x seconds of the determination.
        4. Tester verifies that if it is determined that when text content is presented over a background image or pattern, the text is not easily readable when the page is viewed in 256 grayscale, the tool successfully intervenes within x seconds of the determination.
        5. Tester verifies that if it is determined that when text content is presented over a background image or pattern, the text is not easily readable in default presentation mode, the tool successfully intervenes within x seconds of the determination.
        6. Tester verifies that if it is determined that there are time limits as a part of a competitive activity, the tool successfully intervenes within x seconds of the determination.
        7. Tester verifies that if it is determined that the content has not been reviewed, taking into account the following strategies for facilitating orientation and movement, applying as appropriate: breaking up text into logical paragraphs, providing hierarchical sections and titles, particularly for longer documents, revealing important non-hierarchical relationships, and dividing very large works into sections/chapters with logical labels, the tool successfully intervenes within x seconds of the determination.
        8. Tester verifies that if it is determined that information is not provided that would allow an assistive technology to determine at least one logical, linear reading order, the tool successfully intervenes within x seconds of the determination.
        9. Tester verifies that if it is determined that diagrams are not constructed in a fashion so that they have structure that can be accessed by the user, the tool successfully intervenes within x seconds of the determination.
        10. Tester verifies that if it is determined that where possible, logical tab order has not been created, the tool successfully intervenes within x seconds of the determination.
        11. Tester verifies that if it is determined that where possible, the user is not allowed to select from a list of options as well as to generate input text directly, the tool successfully intervenes within x seconds of the determination.
        12. Tester verifies that if it is determined that errors are not identified specifically and suggestions for correction are not provided where possible, the tool successfully intervenes within x seconds of the determination.
        13. Tester verifies that if it is determined that checks for misspelled words are not applied and correct spellings are not suggested when text entry is required, the tool successfully intervenes within x seconds of the determination.
        14. Tester verifies that if it is determined that where consequences are significant and time-response is not important, none of the following is true: (a) actions are reversible, (b) where not reversible, actions are checked for errors in advance, (c) where not reversible, and not checkable, a confirmation is asked before acceptance, the tool successfully intervenes within x seconds of the determination.
        15. Tester verifies that if it is determined that a list is not provided on the home page of URIs to cascading dictionaries that can or should be used to define abbreviations or acronyms, the tool successfully intervenes within x seconds of the determination.
        16. Tester verifies that if it is determined that the content has not been reviewed, taking into account the following strategies for determining the definition of abbreviations and acronyms, applying them as appropriate: (a) provide a definition or link (with the first occurrence) of phrases, words, acronyms, and abbreviations specific to a particular community, (b) provide a summary for relationships that may not be obvious from analyzing the structure of a table but that may be apparent in a visual rendering of the table, (c) if contracted forms of words are used such that they are ambiguous, provide semantic markup to make words unique and interpretable, the tool successfully intervenes within x seconds of the determination.
        17. Tester verifies that if it is determined that the content has not been reviewed, taking into account the strategies for evaluating the complexity of content, applying as appropriate, the tool successfully intervenes within x seconds of the determination.
        18. Tester verifies that if it is determined that a user can not select a different location for navigation elements in the layout of the page, the tool successfully intervenes within x seconds of the determination.
        19. Tester verifies that if it is determined that the content has not been reviewed, taking into account common ideas for making content consistent and predictable, applying as appropriate, the tool successfully intervenes within x seconds of the determination.
        20. Tester verifies that if it is determined that a list of technologies and features, support for which is required in order for the content to be operable, has not been determined and is not documented in metadata and/or a policy statement associated with the content, the tool successfully intervenes within x seconds of the determination.
        21. Tester verifies that if it is determined that technologies and features on the required list are not available in at least two independently-developed implementations, the tool successfully intervenes within x seconds of the determination.

    8. Success Criteria 3.2: The tool must provide a check (automated check, semi-automated check or manual check) for detecting violations of each Level 3 success criteria of WCAG2.0 (10/27/03 draft)(RELATIVE PRIORITY):

      Test Plan

      1. The tool designer defines how checks (for detecting accessibility violations) are provided by the tool on a form
      2. The tool designer describes each check (for detecting accessibility violations), and what kind it is, on a form
      3. Given this info, the author (tester) verifies each following item (or enters N/A as appropriate) on a form:
        1. Tester verifies that if it is determined that the presentation requires the user to read captions and the visual presentation simultaneously in order to understand the content, the tool successfully provides a check within x seconds of the determination.
        2. Tester verifies that if it is determined that the structural emphases are not chosen to be distinct on different major visual display types, the tool successfully provides a check within x seconds of the determination.
        3. Tester verifies that if it is determined that the content is not constructed such that users can control the presentation of structural elements or the structure of such content can not be varied through alternate presentation formats, the tool successfully provides a check within x seconds of the determination.
        4. Tester verifies that if it is determined that when text content is presented over a background image or pattern, the text is not easily readable when the page is viewed in 256 grayscale, the tool successfully provides a check within x seconds of the determination.
        5. Tester verifies that if it is determined that when text content is presented over a background image or pattern, the text is not easily readable in default presentation mode, the tool successfully provides a check within x seconds of the determination.
        6. Tester verifies that if it is determined that there are time limits as a part of a competitive activity, the tool successfully provides a check within x seconds of the determination.
        7. Tester verifies that if it is determined that the content has not been reviewed, taking into account the following strategies for facilitating orientation and movement, applying as appropriate: breaking up text into logical paragraphs, providing hierarchical sections and titles, particularly for longer documents, revealing important non-hierarchical relationships, and dividing very large works into sections/chapters with logical labels, the tool successfully provides a check within x seconds of the determination.
        8. Tester verifies that if it is determined that information is not provided that would allow an assistive technology to determine at least one logical, linear reading order, the tool successfully provides a check within x seconds of the determination.
        9. Tester verifies that if it is determined that diagrams are not constructed in a fashion so that they have structure that can be accessed by the user, the tool successfully provides a check within x seconds of the determination.
        10. Tester verifies that if it is determined that where possible, logical tab order has not been created, the tool successfully provides a check within x seconds of the determination.
        11. Tester verifies that if it is determined that where possible, the user is not allowed to select from a list of options as well as to generate input text directly, the tool successfully provides a check within x seconds of the determination.
        12. Tester verifies that if it is determined that errors are not identified specifically and suggestions for correction are not provided where possible, the tool successfully provides a check within x seconds of the determination.
        13. Tester verifies that if it is determined that checks for misspelled words are not applied and correct spellings are not suggested when text entry is required, the tool successfully provides a check within x seconds of the determination.
        14. Tester verifies that if it is determined that, where consequences are significant and time-response is not important, none of the following is true: (a) actions are reversible; (b) where not reversible, actions are checked for errors in advance; (c) where not reversible and not checkable, a confirmation is asked before acceptance; the tool successfully provides a check within x seconds of the determination.
        15. Tester verifies that if it is determined that a list of URIs to cascading dictionaries that can or should be used to define abbreviations or acronyms is not provided on the home page, the tool successfully provides a check within x seconds of the determination.
        16. Tester verifies that if it is determined that the content has not been reviewed, taking into account the following strategies for determining the definition of abbreviations and acronyms, applying them as appropriate: (a) provide a definition or link (with the first occurrence) of phrases, words, acronyms, and abbreviations specific to a particular community; (b) provide a summary for relationships that may not be obvious from analyzing the structure of a table but that may be apparent in a visual rendering of the table; (c) if contracted forms of words are used such that they are ambiguous, provide semantic markup to make words unique and interpretable; the tool successfully provides a check within x seconds of the determination.
        17. Tester verifies that if it is determined that the content has not been reviewed, taking into account the strategies for evaluating the complexity of content, applying as appropriate, the tool successfully provides a check within x seconds of the determination.
        18. Tester verifies that if it is determined that a user can not select a different location for navigation elements in the layout of the page, the tool successfully provides a check within x seconds of the determination.
        19. Tester verifies that if it is determined that the content has not been reviewed, taking into account common ideas from making content consistent and predictable, applying as appropriate, the tool successfully provides a check within x seconds of the determination.
        20. Tester verifies that if it is determined that a list of technologies and features, support for which is required in order for the content to be operable, has not been determined and is not documented in metadata and/or a policy statement associated with the content, the tool successfully provides a check within x seconds of the determination.
        21. Tester verifies that if it is determined that technologies and features on the required list are not available in at least two independently-developed implementations, the tool successfully provides a check within x seconds of the determination.
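The "within x seconds" verifications above are amenable to partial automation. The sketch below is a hypothetical tester-side harness, not part of ATAG: `trigger_violation` and `check_reported` are assumed hook functions the tester would supply for the specific tool and test item under evaluation.

```python
import time

def verify_check_latency(trigger_violation, check_reported, x_seconds, poll=0.1):
    """Return True if the tool surfaces a check within x_seconds of the
    violation being introduced; otherwise False. Both callables are
    tester-supplied hooks for the tool under test (hypothetical names)."""
    trigger_violation()                      # introduce the violation in content
    deadline = time.monotonic() + x_seconds  # timing window for the check
    while time.monotonic() < deadline:
        if check_reported():                 # tool has flagged the violation
            return True
        time.sleep(poll)                     # re-inspect the tool periodically
    return False
```

The same loop applies unchanged to the intervention and repair items, by substituting hooks that detect an intervention or a repair instead of a check; the pass/fail result is then entered on the form for that item.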

    9. Success Criteria 3.3: The tool must provide a repair (automated repair, semi-automated repair or manual repair) for correcting violations of each Level 3 requirement of WCAG2.0 (10/27/03 draft)(RELATIVE PRIORITY):

      Test Plan

      1. The tool designer defines how repairs (for correcting accessibility violations) are provided by the tool on a form
      2. The tool designer describes each repair (for correcting accessibility violations), and what kind it is, on a form
      3. Given this info, the author (tester) verifies each following item (or enters N/A as appropriate) on a form:
        1. Tester verifies that if it is determined that the presentation requires the user to read captions and the visual presentation simultaneously in order to understand the content, the tool successfully provides a repair within x seconds of the determination.
        2. Tester verifies that if it is determined that the structural emphases are not chosen to be distinct on different major visual display types, the tool successfully provides a repair within x seconds of the determination.
        3. Tester verifies that if it is determined that the content is not constructed such that users can control the presentation of structural elements or the structure of such content can not be varied through alternate presentation formats, the tool successfully provides a repair within x seconds of the determination.
        4. Tester verifies that if it is determined that when text content is presented over a background image or pattern, the text is not easily readable when the page is viewed in 256 grayscale, the tool successfully provides a repair within x seconds of the determination.
        5. Tester verifies that if it is determined that when text content is presented over a background image or pattern, the text is not easily readable in default presentation mode, the tool successfully provides a repair within x seconds of the determination.
        6. Tester verifies that if it is determined that there are time limits as a part of a competitive activity, the tool successfully provides a repair within x seconds of the determination.
        7. Tester verifies that if it is determined that the content has not been reviewed, taking into account the following strategies for facilitating orientation and movement, applying as appropriate: breaking up text into logical paragraphs, providing hierarchical sections and titles, particularly for longer documents, revealing important non-hierarchical relationships, and dividing very large works into sections/chapters with logical labels, the tool successfully provides a repair within x seconds of the determination.
        8. Tester verifies that if it is determined that information is not provided that would allow an assistive technology to determine at least one logical, linear reading order, the tool successfully provides a repair within x seconds of the determination.
        9. Tester verifies that if it is determined that diagrams are not constructed in a fashion so that they have structure that can be accessed by the user, the tool successfully provides a repair within x seconds of the determination.
        10. Tester verifies that if it is determined that where possible, logical tab order has not been created, the tool successfully provides a repair within x seconds of the determination.
        11. Tester verifies that if it is determined that where possible, the user is not allowed to select from a list of options as well as to generate input text directly, the tool successfully provides a repair within x seconds of the determination.
        12. Tester verifies that if it is determined that errors are not identified specifically and suggestions for correction are not provided where possible, the tool successfully provides a repair within x seconds of the determination.
        13. Tester verifies that if it is determined that checks for misspelled words are not applied and correct spellings are not suggested when text entry is required, the tool successfully provides a repair within x seconds of the determination.
        14. Tester verifies that if it is determined that, where consequences are significant and time-response is not important, none of the following is true: (a) actions are reversible; (b) where not reversible, actions are checked for errors in advance; (c) where not reversible and not checkable, a confirmation is asked before acceptance; the tool successfully provides a repair within x seconds of the determination.
        15. Tester verifies that if it is determined that a list of URIs to cascading dictionaries that can or should be used to define abbreviations or acronyms is not provided on the home page, the tool successfully provides a repair within x seconds of the determination.
        16. Tester verifies that if it is determined that the content has not been reviewed, taking into account the following strategies for determining the definition of abbreviations and acronyms, applying them as appropriate: (a) provide a definition or link (with the first occurrence) of phrases, words, acronyms, and abbreviations specific to a particular community; (b) provide a summary for relationships that may not be obvious from analyzing the structure of a table but that may be apparent in a visual rendering of the table; (c) if contracted forms of words are used such that they are ambiguous, provide semantic markup to make words unique and interpretable; the tool successfully provides a repair within x seconds of the determination.
        17. Tester verifies that if it is determined that the content has not been reviewed, taking into account the strategies for evaluating the complexity of content, applying as appropriate, the tool successfully provides a repair within x seconds of the determination.
        18. Tester verifies that if it is determined that a user can not select a different location for navigation elements in the layout of the page, the tool successfully provides a repair within x seconds of the determination.
        19. Tester verifies that if it is determined that the content has not been reviewed, taking into account common ideas from making content consistent and predictable, applying as appropriate, the tool successfully provides a repair within x seconds of the determination.
        20. Tester verifies that if it is determined that a list of technologies and features, support for which is required in order for the content to be operable, has not been determined and is not documented in metadata and/or a policy statement associated with the content, the tool successfully provides a repair within x seconds of the determination.
        21. Tester verifies that if it is determined that technologies and features on the required list are not available in at least two independently-developed implementations, the tool successfully provides a repair within x seconds of the determination.
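Each test plan above ends with results being entered on a form that lists test number, success criteria number, individuals involved, and results. A minimal sketch of such a form record, assuming a simple CSV layout (the field names here are illustrative, not mandated by ATAG):

```python
import csv
import io

def record_results(results, success_criteria, tester):
    """Serialize one tester's per-item outcomes for a success criterion
    as CSV form rows. `results` maps item number -> 'pass', 'fail', or 'N/A'.
    Field names are assumptions chosen to mirror the form fields described
    in this test plan."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["test_item", "success_criteria", "tester", "result"])
    for item, outcome in sorted(results.items()):   # stable order by item number
        writer.writerow([item, success_criteria, tester, outcome])
    return buf.getvalue()
```

A public listing of these rows, one per verified item, satisfies the form-keeping described in the note under Success Criteria 1.1.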

      References

      1. ATAG 2.0 WD 21 Oct 2003
      2. WCAG 2.0 WD 27 Oct 2003