See also: IRC log
<johnarwe_> MSM is in Prague, and sent regrets
resolution: minutes of 9/18 are approved.
john: Please register for the meeting if you are planning to attend.
pratul: I will update the ETA on my AI.
kumar: I sent the updated test plan last night. I will close the AI.
<johnarwe_> http://lists.w3.org/Archives/Public/public-sml/2008Sep/0022.html
john: msm opened a bug for xml id constraint alignment with SML. He also sent an email about interoperability. I will close the two corresponding action items.
<johnarwe_> wrt Ginny's first stmt, I see no evidence in 9/18 minutes that we decided to use a single smlif instance document for all test cases. In fact, each time this question came up IIRC people including MSM objected, since doing so prevents us from having tests for <locator>
ginny: I have some suggestions about the text in section 2. Item 1.a can also be an advantage.
kumar: I agree.
ginny: Item 2.b does not seem right because we are using SML-IF.
kumar: I had meant item 1 to be the 'no SML-IF used as container' case.
ginny: I will work with Kumar offline to revise the text.
<johnarwe_> the sense I had last week was general agreement that manual comparison of error messages was not useful for interop testing, but might be useful for finding bugs in one's impl.
ginny: If we know that a model has only one error and two implementations produce different error results on validation, then one of the implementations has a bug. In that case, that implementation should fail the test even if it produces 'invalid' as the overall result.
pratul: we decided last time that manual comparison results will not invalidate a test.
ginny: I am OK either way, that is, whether manual comparison results invalidate the test or not. However, I would like the test plan to clarify that point.
kumar: I will clarify in the test plan that manual comparison results will not invalidate a test. They will be used as a diagnostic aid in finding bugs in implementations.
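For illustration, a minimal sketch of the rule Kumar describes, written here in Python (the function and field names are hypothetical, not taken from the test plan):

    # Hypothetical harness rule: only the overall valid/invalid verdict
    # decides pass/fail; error-message differences are recorded only as
    # diagnostics for finding bugs in an implementation.
    def judge(expected_valid, impl_valid, expected_errors, impl_errors):
        passed = (impl_valid == expected_valid)             # normative check
        diagnostics = []
        if sorted(impl_errors) != sorted(expected_errors):  # manual-comparison aid
            diagnostics.append("error output differs from reference; possible implementation bug")
        return passed, diagnostics

    # Example: an implementation reports 'invalid' as expected but emits a
    # different error list; the test still passes and a diagnostic is logged.
    print(judge(False, False, ["constraint C1 violated"], ["constraint C2 violated"]))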
john: let us discuss open issues
#1 We need to specify behavior when an optional feature is not supported.
pratul: If we try to define behavior when an optional feature is not implemented, that may involve changing the spec substantively. We have recently issued the second Last Call and we should avoid heading toward a third Last Call.
ginny: I am not sure whether we should require a consumer to do something when it encounters an instance of an optional feature it does not support. Should we require it to fail? Should we require it to ignore the instance silently?
... I am probably OK with the spec the way it is.
kumar: we can probably look at each optional feature case and see what the spec says.
ginny: locator: the spec already says that a validator that does not support locators must treat the document as not present.
sandy: schemaBindings: We define that an implementation that does not support schemaBindings must use all schema documents to construct a schema.
<johnarwe_> Otherwise, if an SML-IF consumer chooses not to process the schemaBindings element, then the SML-IF consumer MUST compose a schema using all schema documents included in the SML-IF document and MUST use this schema to validate all instance documents in the interchange model.
<johnarwe_> excerpt above is from LC 2 SML section 5, last paragraph
ginny: locid: The spec does not say what a consumer should do when it does not support it.
john: baseURI: The spec already covers all relevant cases: both smlif:baseURI and xml:base present, neither present, etc.
<pratul> I need to go now - bye!
kumar: I will remove the last line of item 2 on page 4. I will add that test results for optional features will not be used for comparison with other implementations. They will be used to make sure that an implementation that supports a feature implements it correctly.
resolution: the group agrees with Kumar's previous statement.
#2 Define directory structure to hold files related to interop testing.
kumar: If we are planning to use COSMOS tests, we can simply say that we will use the same directory structure as COSMOS.
john: I am ok with it.
resolution: the directory structure in section 3 is approved. Under each test directory (e.g., testsForOptionalFeatures) we will use some way to group tests by the feature they test.
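As an illustration of this resolution, one possible grouping under the optional-features directory (the feature subdirectories are hypothetical examples; the actual COSMOS layout is not reproduced here):

    testsForOptionalFeatures/
        locator/
        schemaBindings/
        locid/
        baseURI/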
#3 Need to decide test result format
john: did we miss the test-metadata related open issue?
kumar: I used 'test result format' to mean the test-metadata issue.
john: msm wanted to consider combinations of conforming/non-conforming and valid/invalid.
kumar: a valid model is always conforming, and a non-conforming model is never valid. Therefore, the cases of interest are conforming+valid, conforming+invalid, and non-conforming.
... when a model is non-conforming (e.g., non-well-formed XML), implementations can produce widely varying outputs, so it will be very hard to compare the results of non-conformance tests in an automated way.
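Kumar's reasoning can be summarized in a small sketch, again in Python (the category names are illustrative, not defined by the spec):

    # A valid model is always conforming, so "valid but non-conforming" cannot
    # occur; the remaining cases of interest are the three returned below.
    def expected_category(conforming, valid):
        assert not (valid and not conforming), "a valid model is always conforming"
        if not conforming:
            return "non-conforming"        # e.g. non-well-formed XML; outputs vary widely
        return "conforming+valid" if valid else "conforming+invalid"

    print(expected_category(True, True))    # conforming+valid
    print(expected_category(True, False))   # conforming+invalid
    print(expected_category(False, False))  # non-conforming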
ginny: I would like to defer the decision on the test-metadata till I better understand how the testing process will be structured end-to-end.
kumar: we are not freezing the decision here. we will mention what we think should happen. we can revisit it if the group sees the need later on.
<johnarwe_> issue 4 in the draft is ill-formed: it should be looked at as follows.
<johnarwe_> Each implementation's behavior, for any feature, whether supported by the implementation or not, is prescribed by the spec.
<johnarwe_> As a consequence of this, if the spec prescribes or allows different behavior when a feature is supported vs not supported, two implementations, one of which supports the feature and the other of which does not support the feature, may exhibit different behaviors.
<johnarwe_> ...and Kirk can fix the awkward "of which"s in there
resolution: the group agrees with the previous statement by john.
Last Scribe Date   Member Name        Regrets pending
2008-05-22         Lynn, James        Until further notice
2008-07-10         McCarthy, Julia    Until further notice
2008-09-04         Gao, Sandy
2008-09-11         Wilson, Kirk
2008-09-18         Smith, Virginia
2008-09-25         Kumar, Pandit
Exempt             Arwe, John
Exempt             Dublish, Pratul
Exempt             MSM