Re: Interop testing

From: Anish Karmarkar <Anish.Karmarkar@oracle.com>
Date: Tue, 09 Aug 2005 11:19:21 -0700
Message-ID: <42F8F3A9.7030908@oracle.com>
To: Mark Nottingham <mark.nottingham@bea.com>
CC: Katy Warr <katy_warr@uk.ibm.com>, paul.downey@bt.com, public-ws-addressing@w3.org

Another thing that has helped in the past (for XMLP) was holding 
concalls (non-official, non-WG) for implementers. This was useful not in 
the initial phase of the interop effort but somewhere in the middle of 
the effort, when implementers had specific issues/problems with specific 
implementations or with specific tests (which in certain cases resulted 
in changes to the tests). These issues/problems were easier to discuss 
on a concall than in a long email thread.

I would suggest having such concalls, if the implementers think it would 
be useful.


Mark Nottingham wrote:
> Hi Katy,
> On 05/08/2005, at 3:34 AM, Katy Warr wrote:
>> 1. Is this likely to be f2f testing, or will participants 
>> simply publish remote endpoints?
> We'll have to figure that out; it's pretty much up to us. In past W3C 
> WGs I've participated in, people have self-submitted their test 
> results; i.e., they're not verified or tested for interop, just for 
> feature coverage and correctness. We can certainly do interop testing 
> if we like, of course; as Philippe has mentioned, we could hold an 
> interop workshop to do more.
>> 2. If the procedure is likely to be that the participants publish 
>> remote endpoints (without need for a meeting), what sort of timescale 
>> is usually expected for this testing phase? For example, would 
>> participants have a few weeks to run tests and resolve problems, or 
>> would the interop be more intense - such as a week focused on 
>> problem resolution and testing?
> Don't know yet. In past interop testing efforts I've been involved in, 
> we had a number of small, intense and focused interop mini-sessions 
> leading up to a bigger event.
> The straw-man that I have in mind is a (semi-sequential) list of things 
> that need to happen; if we had discussion around it, it might help 
> answer these questions.
> 1. Agree on and document testable features, and their optionality
> 2. Agree on and document test targets (e.g., service instance, service 
> consumer)
> 3. Design a test scenario for each feature/target combination as 
> applicable, with success criteria
> 4. Hold a number of virtual (i.e., over the net) interop sessions 
> around specific features in isolation
> 5. Give feedback to implementors / specification from interop testing
> 6. Hold one F2F interop event testing features together (probably NOT a 
> normal meeting of the WG)
> 7. Document interop results to exit CR
> Comments?
> Cheers,
> -- 
> Mark Nottingham   Principal Technologist
> Office of the CTO   BEA Systems
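
[Editorial note: step 3 of the straw-man above (a test scenario per feature/target combination, with success criteria) lends itself to automation. The sketch below is purely illustrative and not anything the WG has agreed on: it checks a captured SOAP envelope for the WS-Addressing 1.0 headers (wsa:To, wsa:Action, wsa:MessageID) that a hypothetical "required message addressing properties" scenario might demand. The element and namespace names come from the WS-Addressing 2005/08 drafts; the sample envelope and function names are invented.]

```python
# Hypothetical success-criteria check for one interop test scenario.
# The required-header list and sample message are illustrative only.
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"   # SOAP 1.2 envelope ns
WSA = "http://www.w3.org/2005/08/addressing"       # WS-Addressing 1.0 ns

# Invented sample request, standing in for a captured wire message.
SAMPLE = f"""
<env:Envelope xmlns:env="{SOAP}" xmlns:wsa="{WSA}">
  <env:Header>
    <wsa:To>http://example.org/service</wsa:To>
    <wsa:Action>http://example.org/service/Echo</wsa:Action>
    <wsa:MessageID>urn:uuid:12345678-aaaa-bbbb-cccc-1234567890ab</wsa:MessageID>
  </env:Header>
  <env:Body/>
</env:Envelope>
"""

def check_required_headers(envelope_xml, required=("To", "Action", "MessageID")):
    """Return the names of required wsa:* headers missing from the envelope."""
    root = ET.fromstring(envelope_xml)
    header = root.find(f"{{{SOAP}}}Header")
    present = set()
    if header is not None:
        for child in header:
            # Collect local names of children in the WS-Addressing namespace.
            if child.tag.startswith(f"{{{WSA}}}"):
                present.add(child.tag.split("}", 1)[1])
    return [name for name in required if name not in present]

missing = check_required_headers(SAMPLE)
print("PASS" if not missing else f"FAIL, missing: {missing}")
```

A per-scenario checker like this could run against each published remote endpoint during the virtual interop sessions (step 4), so that the F2F event (step 6) only has to deal with the disagreements.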
Received on Tuesday, 9 August 2005 18:19:41 UTC
