- From: Gez Lemon <gez.lemon@gmail.com>
- Date: Sun, 6 Nov 2005 06:30:40 +0000
- To: Matt May <mcmay@bestkungfu.com>
- Cc: "Bailey, Bruce" <Bruce.Bailey@ed.gov>, Paul Walsh <paul.walsh@segalamtest.com>, w3c-wai-gl@w3.org
On 06/11/05, Matt May <mcmay@bestkungfu.com> wrote:

> Someone else within the last two weeks has outlined which
> components of validity directly affect interaction with assistive
> technology.

Do they have a supporting algorithm that demonstrates they are certain they've caught every known combination of validity errors that could result in an accessibility barrier, or are these just best guesses? A single application could have test cases to which formal methods were applied, such as black-box or white-box testing for each component, but I don't see how every permutation of a validity error could be tested. Did this person provide that information? If so, given the infinite permutations, how were the test cases generated?

> If this is an issue of access to AT, then the least restrictive
> means of meeting our goals is to require that those specific issues be
> resolved.

Today's AT? Tomorrow's AT? Are you aware of a restrictive model with a testing procedure that captures every conceivable validity error for any AT? Whatever content we recommend people generate, it at least needs to be unambiguously parsable by software. Most people on this list recognise that validity errors can result in content being rendered by mainstream browsers such as IE, yet not being accessible to AT; the problem is ensuring we cover all bases.

> Further, a case was made at the f2f that those specific issues
> _are_ covered by other guidelines.

Setting aside the transparency issue -- members of the working group who were unable to attend a face-to-face couldn't possibly be aware of a case made there -- did the case made cover the issues outlined above?

Best regards,

Gez

--
_____________________________
Supplement your vitamins
http://juicystudio.com
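[Editor's note: the parsing-ambiguity point above can be sketched with a short Python example. This is a hypothetical illustration, not part of the original thread; the list markup and the use of Python's standard html.parser module are my own choices. It shows that an invalid fragment with unclosed <li> elements still "renders", but never reports where each list item ends, so every consumer (browser or AT) must guess.]

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Record the start/end tag events the parser actually sees."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

valid = "<ul><li>one</li><li>two</li></ul>"
invalid = "<ul><li>one<li>two</ul>"  # validity error: unclosed <li> elements

for fragment in (valid, invalid):
    parser = TagLogger()
    parser.feed(fragment)
    print(fragment, "->", parser.events)

# The valid fragment yields explicit ("end", "li") events; the invalid one
# yields none, leaving each consumer to apply its own error recovery.
```

A browser's recovery heuristics may happen to match the author's intent here, but nothing in the markup guarantees that an AT's recovery will match the browser's, which is exactly the ambiguity at issue.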
Received on Sunday, 6 November 2005 06:30:46 UTC