- From: Nick Kew <nick@webthing.com>
- Date: Sat, 22 Jun 2002 21:39:40 +0100 (BST)
- To: <w3c-wai-er-ig@w3.org>
Thanks Chaals and Nadia for your comments.

I'm trying to think through exactly how test definition and test application relate to each other, and it seems to me that this exercise calls for an extra logical level in my reports (and very probably in other automated test tools).

The current format for automated accessibility reporting in Valet is:

  { element, result, checkpoint }

where element is clearly defined, checkpoint is selected sometimes arbitrarily, and result is a likelihood of failure against the checkpoint.

Example: a test finds that the HTML element has no lang or xml:lang attribute. It reports a possible failure against the checkpoint calling for the primary natural language to be identified, but the result is uncertain because the language may have been identified by other means. Note that Valet only reports tests when the subject fails them.

The need for a confidence value arises because failing a test doesn't necessarily mean a checkpoint has been violated, as in the above example. Underlying this is a somewhat different test:

  { element, result, {test} }

Using this we can say unambiguously that the test has been passed or failed. In terms of RDF, that's going to look much prettier :-)

> >The actual specification for a Test comprises:
> > (1) What is it testing? One or more Element or Attribute
> >     { (Name)+ , (ELEMENT|ATTRIBUTE) }
> > (2) What is it testing against? One or more guideline of a test suite.
> >     { SuiteID , href, failuremessage }
> > (3) What test is being applied? A test that returns a pass/fail
> >     { TestClass, ConfidenceLevel, (Parameters)* }

This still stands, and I think it should be expressible as RDF with some earl elements, as you noted below:

> EARL has earl:TestCase and earl:id and earl:suite, which might be worth
> re-using.

> Sounds good. The "What is it testing?" bit might be a little ambiguous,
> depending.

There's nothing ambiguous in the automated tests in Valet. As regards human-driven or mixed tests, identifying the test subject unambiguously is an application of our ongoing work on markup metrics and pointers. I may or may not find time to post more thoughts on the subject before the f2f.

> >The second part of this is to permit users to program their own TestClass.
> >This is an arbitrary function, acting on a DOM representation of the
> >document being tested, so writing one is a programming task. I don't see how
> >this could usefully be incorporated into the test schema (and I expect
> >to make this a C++ API for users instead), but I'm open to suggestions.
>
> This should be possible in RDF if you define some basic logic terms and
> utility DOM functions. Given, it might be more of a pain than you want to
> deal with, but it would be a neat experiment. Your code would end up
> being, well, an rdf-to-C compiler. I don't know if that's semantic web
> "pure" anymore.

Can I collar you at the f2f to discuss this?

--
Nick Kew
Available for contract work - Programming, Unix, Networking, Markup, etc.
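For concreteness, here is a minimal C++ sketch of what a user-programmable TestClass and the { element, result, {test} } report tuple might look like. All names here (TestClass, TestResult, the dom::Node stand-in, and so on) are hypothetical illustrations under the assumptions above, not Valet's actual interfaces.

    // Hypothetical sketch only -- not the actual Valet API.
    // Assumes some DOM library providing a node type; a minimal
    // stand-in is forward-declared here.

    #include <string>
    #include <vector>
    #include <memory>

    namespace dom { class Node; }   // placeholder for a real DOM node type

    // Result of applying one test to one element: an unambiguous
    // pass/fail, plus a confidence that a checkpoint is violated.
    enum class Outcome { Pass, Fail };

    struct TestResult {
        Outcome     outcome;
        double      confidence;   // 0.0 .. 1.0
        std::string message;      // human-readable failure message
    };

    // A user-programmable test: an arbitrary function acting on a DOM
    // representation of the document under test.
    class TestClass {
    public:
        virtual ~TestClass() = default;

        // Identification against a test suite: { SuiteID, href, failuremessage }.
        virtual std::string suiteId() const = 0;
        virtual std::string href() const = 0;

        // Apply the test to one element; returns pass/fail.
        virtual TestResult apply(const dom::Node& element) const = 0;
    };

    // One report entry: { element, result, test }.
    struct ReportEntry {
        const dom::Node*                 element;
        TestResult                       result;
        std::shared_ptr<const TestClass> test;
    };

    using Report = std::vector<ReportEntry>;

The mapping from a test to one or more checkpoints, with its associated confidence, could then sit in a separate layer on top of this, keeping the pass/fail result of each test itself unambiguous.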
Received on Saturday, 22 June 2002 17:33:46 UTC