
Re: RDF Core test driven development and QA Test Doc

From: Brian McBride <bwm@hplb.hpl.hp.com>
Date: Thu, 08 Jan 2004 09:36:50 +0000
Message-ID: <3FFD24B2.2050206@hplb.hpl.hp.com>
To: Sandro Hawke <sandro@w3.org>
Cc: david_marston@us.ibm.com, www-qa-wg@w3.org, Jeremy Carroll <jjc@hplb.hpl.hp.com>

Sandro Hawke wrote:
>>DM>> "Checkpoint 1.3. For each class of product the WG intends to
>>DM>>specify, determine and document the testing approach to be used for
>>DM>>its conformance test suite."
>>SH>Ah, but RDF Core chose to not define the products being tested,...

Judging by David's next reply, the term 'product' is used in a 
specialist sense.  I made the same mistake as you, Sandro, though I think 
we can be forgiven, since the term is not defined in the spec I read or 
in the QA Glossary.

David seems to be interpreting "the product of a specification" to mean 
that which is specified by the specification.

QA Folks: you might take that as a suggestion to add the term "product" or 
"product of a specification" to the QA glossary.

David also, I think, is using the term 'specification' in a specialist 
sense, i.e. that specifications define the behaviour of software. 
Declarative statements are 'definitions'.  If that is what is meant by 
these terms in the QA specs, that should also be spelled out in the glossary.

>>How did you demonstrate the existence of more than one conformant (and
>>presumably interoperable) implementation?
> As I understand it (as a close observer from the outside), the group
> saw itself as defining a language/data-format, not any particular
> software at all.

Yes.  We saw ourselves as defining a language and its meaning.

> It wasn't really clear how interoperability could be
> demonstrated.   They debated defining "a conformant RDF parser" but it
> was tricky enough that they didn't do it.

It wasn't the trickiness that was the issue.  We had to keep one eye on 
the calendar, as we were taking rather a while.  To start defining things 
like a parser we would need some notion that parsers existed.  My concern 
was that we would have to define a processing model identifying the 
various sorts of processors there were, a task that risked taking a 
while and was unnecessary.

> So the tests demonstrate interoperability by way of an unnamed,
> undefined set of classes of implementations.

Sorta, though I would spin it differently, without mentioning 
implementations.  We defined the RDF/XML language syntax in terms of a 
grammar over, loosely speaking, input symbols that are SAX (XPath 
node-set) events, i.e. the equivalent of a schema.
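To illustrate what "input symbols that are SAX events" means in practice, here is a small sketch using Python's standard `xml.sax` module (the RDF/XML snippet and the `EventLogger` handler are my own illustration, not from the thread):

```python
import xml.sax

class EventLogger(xml.sax.ContentHandler):
    """Record the SAX event stream -- the 'input symbols' a grammar
    over RDF/XML can be defined in terms of."""
    def __init__(self):
        self.events = []
    def startElement(self, name, attrs):
        self.events.append(("start", name, dict(attrs)))
    def endElement(self, name):
        self.events.append(("end", name))
    def characters(self, content):
        if content.strip():
            self.events.append(("text", content.strip()))

doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/doc">
    <dc:title>An example</dc:title>
  </rdf:Description>
</rdf:RDF>"""

handler = EventLogger()
xml.sax.parseString(doc.encode("utf-8"), handler)
for event in handler.events:
    print(event)
```

The grammar then ranges over this event sequence rather than over raw characters, which is what makes the SAX stream "the equivalent of a schema" here.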

The RDF/XML language semantics is defined in terms of:

   - an abstract syntax - defined in terms of a mathematical graph

   - an equality function between two graphs - a way of determining 
whether two graphs are equal

   - the syntax of a simple language called n-triples

   - an isomorphism between n-triples syntax and the RDF abstract syntax

   - a function that transforms rdf/xml to n-triples

With this machinery we can, given an arbitrary RDF/XML document D:

   - transform D to its equivalent n-triples document T

   - determine the graph that T describes

   - given that the graph of D equals the graph of T we now have the 
graph of D

   - and the RDF semantics document defines the semantics of an abstract 
graph, so now we have the semantics of D.  Done.

No mention of parsers.
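As a rough sketch of the graph-equality piece of that machinery (my own illustration, not from the thread; it assumes ground graphs with no blank nodes, where equality reduces to set equality, whereas full RDF graph equality is isomorphism):

```python
def parse_ntriples(text):
    """Read a ground N-Triples document into a set of (subject,
    predicate, object) triples.  Simplified sketch: terms are
    whitespace-separated, lines end with '.', and there are no blank
    nodes; a real N-Triples reader follows the published grammar."""
    triples = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        s, p, o = line.rstrip(" .").split(None, 2)
        triples.add((s, p, o))
    return triples

def graphs_equal(g1, g2):
    """For ground graphs the equality function is just set equality.
    (With blank nodes it becomes graph isomorphism, not shown here.)"""
    return g1 == g2

doc_a = """
<http://example.org/doc> <http://purl.org/dc/elements/1.1/title> "An example" .
"""
doc_b = """
# the same triple, different spacing
<http://example.org/doc>   <http://purl.org/dc/elements/1.1/title>   "An example" .
"""
print(graphs_equal(parse_ntriples(doc_a), parse_ntriples(doc_b)))  # → True
```

The point of the sketch is that graph equality is defined on the triples themselves, with no reference to any piece of software that produced them.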

Now a software writer can say: I have a piece of software, I call it 
the foobar RDF parser, and foobar correctly implements the transform from 
RDF/XML to n-triples.  Software can claim to correctly implement the 
functions we have defined.  But we don't have to define 'classes of 
product'.


> [ I've added co-chair Brian McBride back to the CC list to correct me
> if I've got it wrong. ]

Thanks.  I'm finding this quite interesting.  As you can see, I have a 
slightly different perspective.

> So the question here is whether folks should take the intermediate
> step of spelling out what is expected to pass the tests -- what gets
> the seal of approval if it does pass the tests -- and what does that
> seal of approval say?     RDF Core said they don't care enough about
> the seals to figure it all out; WebOnt said they cared enough to do
> some of it.

It's not that we didn't care.

I'm beginning to wonder whether the QA specs have taken too narrow a 
perspective on what the 'product' of a specification is, i.e. that it 
must be software.


>>My HTML analogy: you prove the interop by having several compatible
>>user agents (browsers), so the spec defines what the user agent must
>>do, in broad terms, when it is given a defined element (e.g., <a>).

That is a procedural notion of specification.  It talks about what a 
user agent must *do*.  It is requiring a specification to define 
behaviour.  The QA specs allow for declarative forms of specification 
too, yes?

To take the <em> example, and being a bit careless with language, as I 
don't have all the machinery needed set up properly:

1. RenderingHtml is a transform from an HTML document to a physical form 
that can be perceived by the canonical human user.  This document 
specifies constraints on conforming transforms.

2. Given document fragment D1


   and document fragment D2


   RenderingHtml(D1) should be perceived by the canonical human user to 
be more prominent than RenderingHtml(D2).  There are a number of 
techniques that may be used to give greater prominence to text.  These 
depend on the medium in which the text is rendered.  Examples include ...

3.  RenderingHtml("<em>text1</em><em>text2</em>") = 

The example David provided in his later post didn't include 3. I include 
it as an example of something one can say clearly and with precision in 
the declarative style of specification.  And it is nicely machine testable.
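To make "machine testable" concrete, here is a toy sketch; everything in it (the rendering model, the prominence weights, the fragments) is invented for illustration and is not part of the thread or any HTML spec:

```python
import re

def toy_render(html):
    """A hypothetical renderer: map HTML to runs of (text, weight),
    with weight 2 inside <em> and weight 1 elsewhere."""
    runs = []
    for em, text in re.findall(r"<em>(.*?)</em>|([^<]+)", html):
        if em:
            runs.append((em, 2))
        elif text:
            runs.append((text, 1))
    return runs

def satisfies_em_constraint(render, d1, d2):
    """Declarative-style constraint: every run of render(d1) must be
    more prominent (higher weight) than the most prominent run of
    render(d2).  Note it constrains the *result*, never the renderer's
    behaviour."""
    max_d2 = max(weight for _, weight in render(d2))
    return all(weight > max_d2 for _, weight in render(d1))

print(satisfies_em_constraint(toy_render, "<em>text</em>", "text"))  # → True
```

A test suite built this way checks conformance of any claimed rendering function against the constraint, without ever defining a class of rendering software.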

Now a renderer, let's take an example at random, say a printer, can claim 
to implement a rendering function conformant with the constraints specified.

The QA framework takes account of and supports this approach to 
specification, right?

Received on Thursday, 8 January 2004 04:37:57 UTC
