Testing.. a few basic questions.

From: David Illsley <david.illsley@uk.ibm.com>
Date: Thu, 24 Nov 2005 11:55:55 +0000
To: <public-ws-addressing-tests@w3.org>
Message-ID: <OF9B4F3F69.363EDA80-ON802570C3.003E0202-802570C3.00418BC0@uk.ibm.com>

Meant to send this earlier in the week, but I've been off work ill.

Looking at the current test suite, we have a good structure of
well-understood tests, but the missing piece is how they're going to be
driven and monitored.
I may simply be missing 'obvious' things because I haven't done one of 
these compliance/interop testing processes before.

Are we expecting each implementation to have its own coded client which
will send sample messages using its own programming model, or are we
expecting a suite that simply pushes the sample messages over the wire for
the server to respond to? (Or both, I suppose; I can see merit in both.)
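
For the second option, a rough sketch of what I mean (untested Python;
the endpoint URI and message file name are just placeholders, and the
Content-Type/SOAPAction headers assume SOAP 1.1):

    import sys
    import urllib.request

    # Push one canned test message (a complete SOAP envelope on disk)
    # at the endpoint under test and print whatever comes back.
    endpoint = sys.argv[1]                 # URI of the implementation under test
    with open(sys.argv[2], 'rb') as f:     # canned message from the suite
        message = f.read()

    req = urllib.request.Request(
        endpoint,
        data=message,
        headers={'Content-Type': 'text/xml; charset=utf-8',
                 'SOAPAction': '""'})
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode('utf-8'))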

Presumably, with either of the above, we're talking about having a page
which allows you to kick off the 'client' with the URI to target the tests
at; a sketch of what that might look like follows.
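
Something like this, perhaps (again untested Python; run_suite() and the
port number are placeholders for whatever actually drives the tests):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    def run_suite(endpoint):
        # Placeholder: this is where the real test client would be launched.
        print('would run the test suite against', endpoint)

    class KickOff(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            if 'endpoint' in query:
                run_suite(query['endpoint'][0])
                body = b'Test run started.'
            else:
                # No URI supplied yet: show a form asking for one.
                body = (b'<form>Endpoint URI: <input name="endpoint"/>'
                        b'<input type="submit" value="Run tests"/></form>')
            self.send_response(200)
            self.send_header('Content-Type', 'text/html')
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(('', 8080), KickOff).serve_forever()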

In terms of monitoring, we need a tool that will monitor HTTP and drop the
traffic to a file (or multiple files). Apparently we can't use the WS-I
monitor. Any other suggestions?
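
Failing that, even a very small pass-through proxy might do. A rough
sketch (untested Python; TARGET, the port and the file-naming scheme are
all placeholders) which forwards each request to the real endpoint and
dumps both request and response to numbered files:

    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET = 'http://localhost:9090/endpoint'   # real service under test
    counter = itertools.count(1)

    class Monitor(BaseHTTPRequestHandler):
        def do_POST(self):
            n = next(counter)
            body = self.rfile.read(int(self.headers.get('Content-Length', 0)))
            with open('msg%03d-request.xml' % n, 'wb') as f:
                f.write(body)

            # Forward the message unchanged to the real endpoint.
            req = urllib.request.Request(TARGET, data=body, headers={
                'Content-Type': self.headers.get('Content-Type', 'text/xml'),
                'SOAPAction': self.headers.get('SOAPAction', '""')})
            with urllib.request.urlopen(req) as resp:
                reply = resp.read()
                ctype = resp.headers.get('Content-Type', 'text/xml')
            with open('msg%03d-response.xml' % n, 'wb') as f:
                f.write(reply)

            # Relay the response back to the original client.
            self.send_response(200)
            self.send_header('Content-Type', ctype)
            self.end_headers()
            self.wfile.write(reply)

    HTTPServer(('', 8888), Monitor).serve_forever()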

We need to correlate those files to each test. The simplest method would
be to match message ordering against test ordering. The downside is that
if a message got missed, the 'marking' stage would show many more failures
than there really were. The other option is a marker somewhere in the
message on the wire, possibly in the message that is being notified or
echoed?
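
If we went the marker route, wsa:MessageID itself might be enough. A rough
sketch of the marking side (untested Python; the MessageID-to-test mapping
would have to come from the suite, and the file names match the monitor
sketch above):

    import glob
    import xml.etree.ElementTree as ET

    WSA = 'http://www.w3.org/2005/08/addressing'   # WS-Addressing namespace

    def message_id(path):
        # Pull the wsa:MessageID header out of a captured envelope.
        el = ET.parse(path).getroot().find('.//{%s}MessageID' % WSA)
        return el.text.strip() if el is not None else None

    # e.g. {'urn:test:msg001': 'test1100', ...}, supplied by the suite
    id_to_test = {}

    for path in sorted(glob.glob('msg*-request.xml')):
        mid = message_id(path)
        print(path, '->', id_to_test.get(mid, 'unmatched (%s)' % mid))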

Hopefully not all of these are obvious (to everyone other than me),

David Illsley
Web Services Development
IBM Hursley Park, SO21 2JN
+44 (0)1962 815049 (Int. 245049)