RE: Testing.. a few basic questions.

Hi David!

> Looking at the current test suite, we have a good structure of well 
> understood tests but the thing missing is how they're going to be driven 
> and monitored. 
> I may simply be missing 'obvious' things because I haven't done one of 
> these compliance/interop testing processes before.

I think we have yet to lay down how we are going to run these tests.

> Are we expecting each implementation to have its own coded client which 
> will send sample messages using its own programming model or are we 
> expecting a suite that simply pushes the sample messages over the wire for 
> the server to respond to? (or both I suppose, I can see merit in both 
> approaches...)

I'd anticipated each party generating and being able to respond to messages,
i.e. participating as nodes 'A' and 'B' in our scenarios.

In addition, we can have a canned client which fires messages
and sends test responses based upon the XPaths identified by the suite.
I have a proto version of such a beast which I'll aim to package and
submit.
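
To make that concrete, here's roughly the shape of the thing as a Python
sketch. The endpoint URI, sample message file, namespace prefix and XPath
are all placeholders I've invented for illustration; the real client would
take these from the suite:

  # Sketch only: POST a canned request to the implementation under test
  # and evaluate one of the suite's XPaths against the response.
  # ENDPOINT, the sample file name and the XPath are invented placeholders.
  import urllib.request
  from lxml import etree

  ENDPOINT = "http://example.org/wsa-test/endpoint"   # target URI
  SAMPLE = open("test-001-request.xml", "rb").read()  # canned request

  req = urllib.request.Request(
      ENDPOINT, data=SAMPLE,
      headers={"Content-Type": "text/xml; charset=utf-8"})
  with urllib.request.urlopen(req) as resp:
      doc = etree.fromstring(resp.read())

  # Example assertion: the response must carry a wsa:RelatesTo header.
  NSMAP = {"wsa": "http://www.w3.org/2005/08/addressing"}
  ok = doc.xpath("//wsa:RelatesTo", namespaces=NSMAP)
  print("test-001", "PASS" if ok else "FAIL")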

> Presumably with either of the above we're talking about having a page 
> which allows you to kick off the 'client' with the URI to target the tests 
> at?

That sounds interesting - we could provide a web form interface for the 
canned client, which others could implement with their own toolkits.
I'd assumed it would be sufficient to coordinate tests over the phone,
via IRC/IM, or when we meet F2F.

> In terms of monitoring, we need a tool that will monitor HTTP and drop the 
> traffic to a/multiple file(s). Apparently we can't use WS-I monitor. Any 
> other suggestions?

I'm not sure anything prevents people from using the WS-I (or whatever) tools
to trace the messages exchanged; however, I think there was discomfort at our
mandating the use of, say, the WS-I monitor.

> We need to correlate those files to each test. The simplest method would 
> simply be message ordering to match the test ordering. The downside to 
> that would be if a message got missed, the 'marking' stage would show many 
> more failures than appropriate. The other option is a marker somewhere in 
> the message on the wire.. possibly in the message that is being notified 
> or echoed?

I suggest we provide a 'log' file format (XML of course!) which has a series
of captured messages, each labelled with the test and message ID it purports
to exhibit.
We can then have a standard ant / XSLT script which checks the XPaths against
the log and writes a report.
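
Something along these lines, say (element and attribute names entirely up
for grabs):

  <!-- sketch of a possible log format: one entry per captured message,
       labelled with the test and message ID it claims to correspond to -->
  <log implementation="Acme Toolkit" date="2005-11-28">
    <message test="test-001" message-id="request-1" direction="in">
      <!-- captured SOAP envelope, verbatim -->
    </message>
    <message test="test-001" message-id="response-1" direction="out">
      <!-- ... -->
    </message>
  </log>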

As to how each message is labelled, well... that's something that can
be resolved, as you say, by:

1) running each test in order
2) adding a flag into each message (dangerous for cheats like me :)
3) a manual step when compiling the log file

I think that's something we can just leave to the devices of those
compiling the log files.
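
Coming back to the checking step, this is the sort of thing the ant / XSLT
report stage would do; a Python sketch against the log format above, with
the assertion table invented purely for illustration:

  # Sketch only: walk a log file in the format sketched above, evaluate
  # each test's XPath assertion against the captured message it labels,
  # and report. The ASSERTIONS table is made up; in practice it would
  # come from the suite definitions.
  from lxml import etree

  ASSERTIONS = {
      # test id -> (message-id to check, XPath that must select something)
      "test-001": ("response-1", "//wsa:RelatesTo"),
  }
  NSMAP = {"wsa": "http://www.w3.org/2005/08/addressing"}

  log = etree.parse("acme-log.xml")
  for test, (msg_id, xpath) in ASSERTIONS.items():
      entries = log.xpath('/log/message[@test=$t][@message-id=$m]',
                          t=test, m=msg_id)
      if not entries:
          print(test, "MISSING")   # message never captured: flag it
      else:
          ok = all(e.xpath(xpath, namespaces=NSMAP) for e in entries)
          print(test, "PASS" if ok else "FAIL")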

> Hopefully not all of these are obvious (to everyone other than me),

Something that seems 'obvious', but isn't specified, is going to
harbour surprises..

I'll try to sketch something out to add to the page about 'running the tests' ..

Paul

Received on Monday, 28 November 2005 18:03:41 UTC