Some suggestions for the interop matrix

Guillermo, folks,

Following one of my action items, I sent a message[1] to the QA WG,
giving pointers to what we have been doing and explaining the
situation. I also asked whether someone from the WG could join us
in the next teleconf, so that we can ask them our questions
directly.

So far I have only received an acknowledgement that the message
was received.

Nevertheless, I did some work, and the only two available tools
we have today are those that were created for the SOAP [2]
and RDF Core/OWL [3] test suites.

You'll find here below a summary of my findings. I hope this can
give us some leads and inspiration for our work.

-jose

The approach to define these test suites is the following:

1. Identify the concepts in the spec that we want to test
   and assign each one a name so that we can refer to it.
   Moreover, each concept refers to all the scenarios that
   test it.

2. State some rules if you assume that something will always
   take place (e.g., our common key sets).

3. Define a number of scenarios, describing what is expected
   and giving the input and output messages. Also say which parts
   of the spec each scenario takes into account, using
   the labels defined in 1.

The working group approves each scenario and developers say which
scenarios they support (rather than which features they support).

This approach makes it easy to tell at a glance if all the concepts
of the spec have been tested and if we have enough implementations
supporting each one. I think that it also gives us a more reliable
markup for defining scenarios.
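
To make this concrete, here is a minimal sketch in Python of how such
a coverage check could work; the scenario, concept and implementation
names are made up for the example:

    # Made-up example data: each scenario lists the concepts it tests
    # and the implementations that claim to pass it.
    scenarios = {
        "echo-single-key": {"concepts": ["key-lookup"],
                            "implementations": ["implA", "implB"]},
        "merge-two-sets": {"concepts": ["key-lookup", "set-merge"],
                           "implementations": ["implA"]},
    }
    all_concepts = {"key-lookup", "set-merge", "error-reporting"}

    # Concepts not tested by any scenario.
    covered = {c for s in scenarios.values() for c in s["concepts"]}
    for concept in sorted(all_concepts - covered):
        print("no scenario tests:", concept)

    # Concepts with fewer than two supporting implementations.
    for concept in sorted(covered):
        impls = {i for s in scenarios.values()
                 if concept in s["concepts"]
                 for i in s["implementations"]}
        if len(impls) < 2:
            print("not enough implementations for:", concept)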

The tools that exist were customized to their task and are not meant
to be general test suite tools.

The tools used for the SOAP test suite use XML[4]. There are a
number of XML files and XSLT scripts to generate the test suite.
The messages for the test scenarios are at [5]. The DTD that's used
is a modified specprod one. The source XML file is
soap12-testcollection.xml and the XSL script seems to be
ts-html.xsl (I haven't tested it yet). The advantage here is that
we have something that can be used straight away, possibly with
minimal adaptation. The drawback is that everything must be done
by hand.
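
For what it's worth, if we go the XSLT route, producing the HTML
should be a one-liner with any XSLT processor. A rough sketch using
Python's lxml (assuming ts-html.xsl needs no stylesheet parameters):

    from lxml import etree

    # Sketch only: assumes ts-html.xsl turns soap12-testcollection.xml
    # into the HTML test suite with no extra parameters.
    source = etree.parse("soap12-testcollection.xml")
    transform = etree.XSLT(etree.parse("ts-html.xsl"))
    result = transform(source)

    with open("testsuite.html", "w", encoding="utf-8") as out:
        out.write(str(result))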

These tools are meant for building the test suite. There is no
provision for filling in an interoperability matrix. The SOAP
implementation report[6] was apparently done by hand.

The tools that were used to generate the OWL and RDF Core test suites
AND implementation reports use RDF and a specific RDF ontology.
There are a number of scripts written in Python. These tools
look really interesting, but I think that understanding and adapting
them could be complex, as there are many different code modules.
However, it is interesting that they generate everything. And using
RDF is much more helpful when you want to know, e.g., have we covered
everything in the spec? Do we have enough implementations? and other
similar questions. If this approach is of interest, I'd propose that
we either have a Python expert look at the code and evaluate it, or
that we reuse the ontology and write our own tools. I can contribute
time to write those tools, as I'm already RDF-aware and have
experience with Redland and other RDF tools.
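
To illustrate the kind of question RDF makes cheap to answer, here
is a rough sketch using the rdflib Python library; the file names and
the vocabulary (namespace, class and property names) are invented
stand-ins, not the terms the OWL/RDF Core ontology actually defines:

    from rdflib import Graph, Namespace, RDF

    # Invented vocabulary standing in for the real test ontology.
    T = Namespace("http://example.org/testOntology#")

    g = Graph()
    g.parse("test-manifest.rdf")           # concepts and scenarios
    g.parse("implementation-reports.rdf")  # merged implementor reports

    # Which concepts have no scenario testing them?
    for concept in g.subjects(RDF.type, T.Concept):
        if not list(g.subjects(T.tests, concept)):
            print("no scenario for", concept)

    # How many implementations pass each scenario?
    for scenario in g.subjects(RDF.type, T.Scenario):
        passing = set(g.subjects(T.passes, scenario))
        print(scenario, "passed by", len(passing), "implementations")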

For getting the implementation reports, we have two choices. We can
have a web form or a standalone tool that implementors use to say
which tests they have run, and which generates an RDF or XML report
that we can feed into whatever tool we use to build the
interoperability matrix (I think it's better to avoid constructing
it by hand).
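
As a sketch of what such a tool or form could emit, here is a minimal
example that writes a report as RDF/XML with rdflib; the vocabulary
and the implementation name are again invented for illustration:

    from rdflib import Graph, Namespace, URIRef, RDF

    # Invented vocabulary and resource names, purely for illustration.
    T = Namespace("http://example.org/testOntology#")
    impl = URIRef("http://example.org/implementations/acme-toolkit")

    g = Graph()
    g.add((impl, RDF.type, T.Implementation))
    g.add((impl, T.passes, T["scenario-echo-single-key"]))
    g.add((impl, T.passes, T["scenario-merge-two-sets"]))

    g.serialize("acme-report.rdf", format="xml")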

The other alternative is to use a wbs form[7]. This is an on-line
system we have at W3C for running surveys and publishing their
results. The pages that are produced are in XHTML. The advantage I
see with this one is that we can have a living questionnaire and add
new questions (scenarios, in fact) whenever we want, and users can
complete them without losing their previous answers. Moreover, the
questionnaire results are produced automatically.

The most bothersome part is that you need to add the questions one
by one using an on-line form.
   
[1] http://lists.w3.org/Archives/Public/www-qa-wg/2004Jun/0076.html
[2] http://www.w3.org/2003/11/results/rdf-core-tests
[3] http://www.w3.org/2003/11/results/rdf-core-tests
[4] http://www.w3.org/2000/xp/Group/2/06/LC/
[5] http://www.w3.org/2000/xp/Group/2/06/LC/msgs/
[6] http://www.w3.org/2000/xp/Group/2/03/soap1.2implementation.html
[7] http://www.w3.org/2002/09/wbs/1/
