
QA tips for building a test suite/interop matrix

From: Jose Kahan <jose.kahan@w3.org>
Date: Fri, 16 Apr 2004 19:27:02 +0200
To: www-xkms@w3.org
Message-ID: <20040416172702.GH11372@inrialpes.fr>

Sorry it took some time to forward the message. I think that we
already need to think about:

  identifying ownership and copyright info on both submitted
  materials and published materials [1] [2] [3].

For example, do we expect the test vectors that have been contributed
to become public domain? Does the copyright belong to the submitter,
to W3C, or to both?

I like one of the ideas that Mary mentions, and it matches one I had
been mulling over: have a number of tests that can be executed
automatically by a script. What I had in mind for testing servers was
to have some known public key pairs, have the server install them, and
then run the script with a battery of tests. The script would take
each reply, compare it to the expected result, and check that the
server's output is compliant.
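
To make that concrete, here is a minimal sketch of such a driver
script (plain Python over HTTP POST; the server URL, test directory
and file-naming scheme are all made up for illustration, and the
byte-for-byte comparison is naive - a real harness would canonicalize
the XML and mask volatile fields first):

# Hypothetical XKMS test driver: post each canned request to the
# server under test and compare the reply to the expected result.
import glob
import sys
import urllib.request

SERVER_URL = "http://localhost:8080/xkms"   # made-up address of the server under test

def run_battery(server_url=SERVER_URL, test_dir="tests"):
    failures = 0
    # Each test is a pair of files: NNN-request.xml / NNN-expected.xml
    for req_path in sorted(glob.glob(test_dir + "/*-request.xml")):
        exp_path = req_path.replace("-request.xml", "-expected.xml")
        with open(req_path, "rb") as f:
            request = f.read()
        with open(exp_path, "rb") as f:
            expected = f.read()
        post = urllib.request.Request(server_url, data=request,
                                      headers={"Content-Type": "text/xml"})
        reply = urllib.request.urlopen(post).read()
        # Naive comparison; volatile fields (nonces, timestamps,
        # signature values) would have to be masked or canonicalized.
        status = "PASS" if reply.strip() == expected.strip() else "FAIL"
        if status == "FAIL":
            failures += 1
        print(status, req_path)
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_battery() else 0)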

As we have both synchronous and asynchronous modes, I thought that,
if it's possible to configure the server to work in either mode (or
to know that it has to be asynchronous for a given key), we could
first set the server to one mode, run the tests, then set it to the
other mode and run the tests again.
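
Continuing the sketch above, the outer loop could simply run the same
battery once per mode; set_server_mode() here is purely hypothetical
and stands for whatever configuration hook (admin interface,
config-file rewrite plus restart) the server actually offers:

def set_server_mode(mode):
    # Hypothetical hook: reconfigure the server under test to answer
    # synchronously or asynchronously.  How this is done is entirely
    # server-specific (config file, admin command, etc.).
    raise NotImplementedError(mode)

def run_both_modes():
    failures = 0
    for mode in ("synchronous", "asynchronous"):
        set_server_mode(mode)
        failures += run_battery(test_dir="tests/" + mode)
    return failures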

For testing clients, I think we need a server that sends back fixed
output, plus a test walkthrough. Something like: "send the first
request to the server (the server checks for errors), the server
replies. The client must check that the answer is ...".
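
A minimal sketch of such a fixed-output stub, using Python's standard
HTTP server; the canned reply file and the port are made-up
placeholders, and a fuller stub would pick the reply based on the
request's message type:

# Hypothetical fixed-output stub server for exercising XKMS clients.
from http.server import BaseHTTPRequestHandler, HTTPServer

with open("canned/locate-result.xml", "rb") as f:   # placeholder reply file
    CANNED_REPLY = f.read()

class StubHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read (and here ignore) the client's request; the reply is
        # always the same pre-recorded message, so the walkthrough can
        # state exactly what the client must check in the answer.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(CANNED_REPLY)))
        self.end_headers()
        self.wfile.write(CANNED_REPLY)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()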

Just some ideas.


attached mail follows:

Hi Jose,
We have been involved in a number of testing efforts within W3C - I'll 
try to pass on some information on what we have done with these groups.

In almost all of the efforts, we have initially developed a process
document - it's been useful in identifying how the testing process is
going to work, in particular in identifying ownership and copyright
info on both submitted materials and published materials [1] [2] [3].
Typically, in these efforts, we try to put together a test description
dtd [4], or schema [5], that both describes the test and gives enough
information about the test to put the processor in the proper mode to
run the test - ie, valid/invalid, entities, whitespace, etc.
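
For illustration only (this is an invented entry, not taken from the
dtd in [4] or the schema in [5]), such a description could carry the
request, the expected result, and the mode flags, and the harness
would read it before running the test:

import xml.etree.ElementTree as ET

# Invented test-description entry, purely for illustration:
DESCRIPTION = """
<test id="locate-001" mode="synchronous" type="valid">
  <request href="locate-001-request.xml"/>
  <expected href="locate-001-expected.xml"/>
  <purpose>LocateRequest for a known key pair</purpose>
</test>
"""

test = ET.fromstring(DESCRIPTION)
# The harness uses the attributes to put the processor in the proper
# mode before running the test, as described above.
print(test.get("id"), test.get("mode"), test.get("type"))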
Running the tests and reporting the results is typically an exercise
left to the user :-). The WG's that have used this process to exit CR
have required companies to run the tests and report back their results
in a predefined format, which is also defined in the test description
file. Then it is straight-forward to come up with a transformation to
display the results in a color-coded table.
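
As a rough sketch of that last step (the results format and the
colors are invented here; in practice it would be a transform over
whatever format the group agrees on):

import xml.etree.ElementTree as ET

# Invented results format: one <result> per test, with a status attribute.
RESULTS = """
<results implementation="ExampleToolkit 0.1">
  <result test="locate-001" status="pass"/>
  <result test="validate-002" status="fail"/>
</results>
"""

COLORS = {"pass": "#9f9", "fail": "#f99"}   # green / red table cells

rows = []
for r in ET.fromstring(RESULTS):
    rows.append('<tr><td>%s</td><td style="background:%s">%s</td></tr>'
                % (r.get("test"), COLORS.get(r.get("status"), "#ccc"),
                   r.get("status")))
print("<table>" + "".join(rows) + "</table>")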
You might also want to check out the way that the DOM Test Suite [6]
was put together, particularly if you are interested in testing
multiple bindings. Here, we developed a DOM Test Suite Markup
Language, developed tests in this language, and used transformations
to convert them to specific bindings.

Here, instead of using a common test description language, the tests
are written to fit in with the JUnit and JSUnit testing frameworks.
The tests are run, typically by Curt Arnold, and he brings problems
forth to the WG. While it can be done this way, it is very labor
intensive, and sometimes bugs slip through.

I hope this helps - if you have further questions, I'll do my best to
answer them.

Mary Brady
[1] http://www.w3.org/2002/06/DOMConformanceTS-Process-20020627
[2] http://www.w3.org/XML/Test/XMLConformanceTS-Process-20031210.html
[4] http://www.w3.org/Style/XSL/TestSuite/tools/testsuite.dtd
[6] http://www.w3.org/DOM/Test/
