- From: Sandro Hawke <sandro@w3.org>
- Date: Wed, 07 Jan 2004 12:48:41 -0500
- To: david_marston@us.ibm.com
- Cc: www-qa-wg@w3.org, Jeremy Carroll <jjc@hplb.hpl.hp.com>, bwm@hplb.hpl.hp.com
> DM>> "Checkpoint 1.3. For each class of product the WG intends to > DM>>specify, determine and document the testing approach to be used for > DM>>its conformance test suite." > > SH>Ah, but RDF Core chose to not define the products being tested,... > > How did you demonstrate the existence of more than one conformant (and > presumably interoperable) implementations? As I understand it (as a close observer from the outside), the group saw itself as defining a language/data-format, not any particular software at all. It wasn't really clear how interoperability could be demontrated. They debated defining "a conformant RDF parser" but it was tricky enough that they didn't do it. So the tests demonstrate interoperability by way of an unnamed, undefined set of classes of implementations. I think of them as: (1) "RDF Parsers", which are software libraries. To be tested, they need to be called by a custom test harness which understands the particular parser's API, the test format, and the non-normative N-Triples format. (2) Several classes of "RDF Reasoners", which I can't name exactly, but they vary along the axes of which semantics they implement, which datatypes they support, and how complete (or semi-complete) they are. I didn't understand the semantic options correctly when I implemented my RDF Reasoner (Surnia), which meant it failed some tests. I brought this up with the WG and got the situation clarified. Surnia supports no datatypes; and it's complete. (It's fairly easy to be complete if you don't support datatypes.) [ I've added co-chair Brian McBride back to the CC list to correct me if I've got it wrong. ] So the question here is whether folks should take the intermediate step of spelling out what is expected to pass the tests -- what gets the seal of approval if it does pass the tests -- and what does that seal of approval say? RDF Core said they don't care enough about the seals to figure it all out; WebOnt said they cared enough to do some of it. > SH>WebOnt decided to define only some of the products... > > This fits the way QAWG handles it. You don't need to define conformance > criteria for every product that may handle some aspect of what you > specify, just those that represent the realization of your goals. The > most common example is that many specs have some XML dialect but don't > impose any requirements on editors that compose the XML. > > SH>I'm not sure their reasoning, but I'm fairly comfortable with it by > >analogy to HTML. It makes sense to define the language and recognize > >full well that you don't know all the kinds of processors there might > >be for it. > > My HTML analogy: you prove the interop by having several compatible > user agents (browsers), so the spec defines what the user agent must > do, in broad terms, when it is given a defined element (e.g., <a>). > HTML editors and other tools need not be addressed. If you don't try > to have various browser implementations do the "same" thing with an > <a> element, what was the point of specifying standard HTML? The > expectation of behavior of <a> was set by the WG. > .................David Marston You could say "the 'em' element indicates the included text content has emphasis." No need to mention a browser or anything. This kind of language never has RFC 2119 MUST/SHOULD language in it, because nothing *does* anything. It's all about what syntactic structures mean. It is hard to write a test suite for that -- what does "emphasis" mean, after all? 
-- but RDFCore and WebOnt were able to specify what many of the
syntactic elements meant in a precise mathematical way, which lends
itself to fairly straightforward testing of some probably-useful
program components. Some elements, like the "obsoletes" relation
between ontologies, are left to human judgement (as "emphasis" would
have to be).

    -- sandro
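[ For concreteness, a minimal sketch of the kind of per-parser
positive-test check described above: parse the test input with the
parser under test, parse the expected (non-normative) N-Triples, and
compare the two graphs up to blank-node renaming. It is written in
Python against an rdflib-style API; the function name and file
arguments are illustrative rather than any WG's actual harness, and a
real harness would also read the test manifest and handle negative
(error) tests. ]

    import sys
    from rdflib import Graph
    from rdflib.compare import isomorphic

    def run_positive_parser_test(input_rdfxml, expected_ntriples):
        # Parse the test input with the parser under test.
        actual = Graph()
        actual.parse(input_rdfxml, format="xml")
        # Parse the expected output, published as N-Triples.
        expected = Graph()
        expected.parse(expected_ntriples, format="nt")
        # The graphs must match up to a renaming of blank nodes.
        return isomorphic(actual, expected)

    if __name__ == "__main__":
        ok = run_positive_parser_test(sys.argv[1], sys.argv[2])
        print("PASS" if ok else "FAIL")
        sys.exit(0 if ok else 1)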
Received on Wednesday, 7 January 2004 12:55:15 UTC