Draft Test for 2.4.1 (Text Search)

Below is an attempt to create a test script for 2.4.1 (Text Search), following the example of Jan's reworking of Jeanne's draft.

However, the effort brought up a number of questions about testing this SC and about the test format in general. These follow the draft test.


Draft test (with nested lists numbered to make the nesting clear, even though the numbers won't appear in the final version):

2.4.1 Text Search: The user can perform a search within rendered content (e.g. not hidden with a style), including rendered text alternatives and rendered generated content, for any sequence of printing characters from the document character set. (Level A)
Test Resource: Accessible test content file (Level A, AA, AAA):
This accessible content is needed to test criteria such as whether text alternatives are properly displayed. The test content should:
- be a complete file in the "included" web content technology(ies) (e.g. HTML4) with no known accessibility problems at the given conformance level.
- make use of as many WCAG 2.0 techniques as feasible at the given level (these should be identified in the file for ease of reference).
- include various types of non-text content (images) and time-based media with their required and optional alternatives.
- include various types of generated content (e.g. numbers generated by the LI element, content inserted by styles such as :after {content:...}, and content inserted by scripts).
- include a wide range of characters in multiple languages, including normal letters, symbols, characters not found in the primary language of the user agent, characters not supported by the browser's current font (i.e. those represented by a placeholder symbol such as an empty rectangle), and strings in both left-to-right and right-to-left languages if the browser and platform support them. These should be found in normal content, alternative content, and generated content. (A rough sample fragment is sketched just below.)
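
Purely as an illustration (not part of the proposed test itself; the file name, class names, and strings below are invented), a minimal HTML sketch of a test content file exercising several of these requirements might look something like this:

    <!-- sample-search-content.html: hypothetical test content fragment -->
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Sample content for text search testing</title>
      <!-- Generated content inserted by a style rule -->
      <style>.note:after { content: " [string inserted by CSS]"; }</style>
    </head>
    <body>
      <ol>
        <li>Item whose number is generated list content</li>
      </ol>
      <!-- Alternative content: the alt text should be searchable once rendered -->
      <img src="chart.png" alt="Searchable alternative text for the chart">
      <!-- Normal text plus a right-to-left string -->
      <p class="note">Normal text, plus a right-to-left string: <span dir="rtl" lang="he">שלום עולם</span></p>
      <!-- Generated content inserted by script -->
      <script>document.body.insertAdjacentHTML('beforeend', '<p>String inserted by script</p>');</script>
    </body>
    </html>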

001 Test Assertion: Search can find strings within rendered normal, alternative, and generated content.
1. Identify the types of normal content, alternative content, and generated content contained in the "Accessible test content file". Alternative content might include alternative text, long descriptions, captions, transcripts, audio descriptions, fallback content, etc. Generated content might include automatic numbers and bullets for lists, content generated by styles, and content inserted by scripts.
2. Identify the types of text supported by the user agent and found in the "Accessible test content file", such as normal letters, digits, punctuation, symbols, characters not found in the primary language of the user agent, characters not supported by the browser's current font (i.e. those represented by a placeholder symbol such as an empty rectangle), ASCII and non-ASCII characters, and strings in both left-to-right and right-to-left languages.
- In some cases there will be multiple alternatives (i.e. instantiated techniques) for a given type of alternative (e.g. captions using SMIL or HTML5 video tracks).
3. For each type of content identified in Step 1:
   3.1 If the text is not already rendered (e.g. alternative content), examine the user interface (or search the documentation) to identify the mechanism for rendering the text (e.g. displayed on the screen, played through the speakers, etc.). The mechanism(s) might include global preferences, context menus, etc. Adjust the presentation where necessary so that the text is rendered. These mechanisms may already have been identified for testing Success Criterion 1.1.1.
     3.1.1 If no mechanism exists to display this type of content, then *Go to the next type of content* (return to Step 3)
   3.2 Examine the user interface (or search the documentation) to identify the mechanism for searching for text within the type of content (normal content, alternative content, or generated content). In most cases a single mechanism will search all these contexts, but in some cases separate mechanisms may be used (e.g. searching normal document text vs. searching closed captions).
     3.2.1 If no mechanism exists, then select FAIL.
   3.3 For each type of text identified in Step 2:
     3.3.1 Activate the mechanism to search for a text string of the type identified in Step 2.
     3.3.2 If there is no indication that the string was found in the specific type of content being searched, then select FAIL.
4. Select PASS (all searches for different types of text in different types of rendered content have been successful)


A. Questions about tests in general:

A.1 As I said in a separate email, the current test script format can be extremely inefficient, because it would often be much quicker to make one pass through the product testing several SCs in each area, rather than to make multiple passes testing one SC at a time. For example, as long as you're testing the ability to search various content types (2.4.1), you might as well test searching those types in reverse (2.4.2), scrolling to bring found content into view (2.4.3), and so forth at the same time. Is it possible for us to combine the tests for multiple SCs when it makes sense, or do we have to essentially repeat the procedures for each SC?

A.2 To what extent should our tests address edge cases? For example, should a user agent pass the requirement to search for text even if it cannot search for and/or find double-byte characters, right-to-left strings, one particular special-cased character, strings that look contiguous to the user but are actually broken up in the HTML source, text in controls, or text that is obscured by other content? If we expect those edge cases to be handled, should we require testing them? Could we, and should we, draw a line somewhere between the extremes of requiring the sort of labor-intensive, exhaustive testing that the product's QA team should do and testing only the most obvious tasks, thereby allowing products to pass even though they have major, gaping holes?
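
To make the "broken up in the HTML source" case concrete, here is a purely illustrative fragment: it renders to the user as the single word "international", yet no single text node in the source contains that string:

    <!-- Renders as "international", but the source splits the word across three nodes -->
    <p>inter<span class="emphasized">nation</span>al</p>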

A.3 When a loop is used to test multiple conditions or formats, and any one of them causes the entire test to FAIL, the test will only generate a single pass/fail result rather than a complete list of the content types that fail, even though the latter would be more useful for readers and developers. In some cases this could result in an organization repeating the test-fix-test cycle over and over, with the testers each time reporting a single failing case and not proceeding to find the other cases that would also fail.


B. Questions about testing success criterion 2.4.1 (Text Search):

B.1 Do we require searching across iFrames, into edit fields, etc.?
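
For instance (markup invented purely to illustrate the question), it isn't obvious whether a search for "needle" should be expected to reach either of these:

    <!-- Text inside a nested browsing context (a separate document) -->
    <iframe src="inner.html" title="Embedded page that contains the word needle"></iframe>
    <!-- Text that is the current value of a form control -->
    <input type="text" value="needle inside an edit field">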

B.2 How could you test 2.4.1 (Text Search) without also testing 2.4.3 (Match Found)? What would it mean to do a search that doesn't indicate the matched text?

B.3 It's possible to imagine implementations that would meet the letter of the SC without meeting its spirit. For example, a command that merely displayed a list box with page/line references for all occurrences of a string would comply, but it wouldn't be very useful for a screen-based browser.

B.4 "Document character set" is defined as "The internal representation of data in the source content by a user agent." I don't understand that: it doesn't say anything about a character set, and in fact the user agent's representation of the DOM could fit this definition.


C. General observations:

C.1 When writing these it's easy to slip into a verbose, legalese style with lots of redundancy, which could make it hard for the reader/tester trying to use them.

C.2 I found this exercise pretty unsatisfying. There seemed to be a lot of handwaving, relying on the audience to identify a lot of test cases and craft their own test documents. When we do provide them (e.g. for the major formats), it would be a lot easier for the testers if we could provide more concrete directions for their use (e.g. "Search for each of the following 15 strings...").

C.3 It's really difficult to write test cases that are technology neutral. For example, it's hard not to assume that a text search facility will "move" to the next occurrence, but there may be some facilities that don't.



     Thanks,
     Greg
