Question on running the tests in the test suite

We are trying to run the tests in the test suite with our XBinder data 
binding tool and are struggling to determine what constitutes a 
successful test.  Note that our tool is primarily an XML schema binding 
tool and does not yet have the capability to handle the WSDL files as 
defined in the suite.  However, it appears that you have provided XSD 
files and XML instances for all tests, which are what we are trying to 
use.

Where we are running into difficulty is in comparing the XML instances 
we generate with the original input instances.  Our process is to create 
a read/write driver with our data binding tool for each test schema.  
This driver reads in an instance, decodes it into data variables, and 
then re-encodes the data to form a new instance.  We then compare the 
input to the output.  The problem is that what comes out does not match 
what goes in for a variety of reasons, mostly because the input 
instances contain extra namespace declarations that are not needed and 
are therefore dropped when the data is re-encoded.  As a result, the 
output instances are equivalent to the input instances but do not match 
them exactly.
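
For illustration, here is a minimal sketch of that driver in Python.  
Our actual drivers are generated code; decode_instance and 
encode_instance below are hypothetical stand-ins for the generated 
decode and encode functions:

    import sys

    def decode_instance(xml_bytes):
        # Stand-in for the generated decoder: parses the instance into
        # typed data variables.  Here it simply passes the bytes through.
        return xml_bytes

    def encode_instance(data):
        # Stand-in for the generated encoder: serializes the data
        # variables back to XML.  The real encoder emits only the
        # namespace declarations it needs, which is where the
        # byte-level mismatch arises.
        return data

    def round_trip_matches(path):
        with open(path, "rb") as f:
            original = f.read()
        reencoded = encode_instance(decode_instance(original))
        # Naive byte comparison: this is the check that fails even when
        # the two instances are logically equivalent.
        return original == reencoded

    if __name__ == "__main__":
        print(round_trip_matches(sys.argv[1]))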

So the question is: how should we compare input with output?  Do the 
patterns you have specified help with this in any way?  Or is there some 
kind of XML differencing tool you can recommend that can cope with 
differences like this?  (We have not been able to find one.)  Or are we 
totally off base as to how to run the tests?
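
For concreteness, the kind of check we are hoping for is sketched below, 
assuming Python and lxml.  Our understanding is that exclusive 
canonicalization omits namespace declarations that are not visibly used, 
which should absorb the differences described above, but we have not 
verified this against the suite:

    import sys
    from lxml import etree

    def canonical_bytes(path):
        # Exclusive C14N drops namespace declarations that are not
        # visibly utilized, so unused declarations in the input should
        # not produce a spurious difference.
        tree = etree.parse(path)
        return etree.tostring(tree, method="c14n", exclusive=True)

    def equivalent(in_path, out_path):
        return canonical_bytes(in_path) == canonical_bytes(out_path)

    if __name__ == "__main__":
        print(equivalent(sys.argv[1], sys.argv[2]))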

Regards,

-- 
Ed Day
Objective Systems, Inc.
http://www.obj-sys.com
