- From: Lieske, Christian <christian.lieske@sap.com>
- Date: Wed, 15 Feb 2006 11:15:13 +0100
- To: "Yves Savourel" <yves@opentag.com>, <public-i18n-its@w3.org>
Hello everyone,

Please find my comments below (starting with "CL>").

Best regards,
Christian

-----Original Message-----
From: public-i18n-its-request@w3.org [mailto:public-i18n-its-request@w3.org] On Behalf Of Yves Savourel
Sent: Sunday, 12 February 2006 06:51
To: public-i18n-its@w3.org
Subject: RE: On conformance

Hi Christian, Felix, and all,

> So I think you should provide all tests which you think are necessary,
> not only the ones for "terminology". This might be a very complicated
> task, *if* you assume a lot of conformance levels, and even conformance
> criteria specific to a single data category.

Our data categories are quite diverse: Ruby, for example, has little to do with translatability. This means it probably makes sense for the applications that will implement ITS to provide support for only some of the data categories.

CL> Or only provide _limited_ support (cf. the discussion on in situ/dislocated).

For example, a translation tool would implement the translatability and localization information data categories but completely ignore terminology.

CL> I am not sure that all translation tools would do that.

Therefore I think we have to test the 6 data categories separately (I think <its:span> is something different and can be tested along with all the in situ cases).

From the "rules location" viewpoint we have: in XML DTD, in XML Schema, in RELAX NG, external dislocated, internal dislocated, and in situ... 6 cases. In addition, I think it's important to also have test cases for each data category where all the different "rules locations" are combined. So 7 cases. This gives us the following matrix:

http://www.w3.org/International/its/tests/#Summary

which is... 42 cases overall (although there may be a few cases fewer, as not all types of rules location apply to all data categories). I think it's important that we provide at least one standalone test case for each of these combinations. It is quite a bit of work, but it is probably the only way to ensure ITS is sound.
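Just to make the "rules location" cases concrete, here is a rough sketch of what the in situ and dislocated variants could look like for the translatability case. The element and attribute names below are only illustrative, not normative syntax; the exact markup is of course still open:

```xml
<!-- (1) In situ / local: the ITS attribute sits directly on the element -->
<doc xmlns:its="http://www.w3.org/2005/11/its">
  <p>Translate this paragraph.</p>
  <p its:translate="no">Do not translate this one.</p>
</doc>

<!-- (2) Internal dislocated: global rules carried inside the same document -->
<doc xmlns:its="http://www.w3.org/2005/11/its">
  <header>
    <its:rules version="1.0">
      <its:translateRule selector="//code" translate="no"/>
    </its:rules>
  </header>
  <p>Run <code>make install</code> to build.</p>
</doc>

<!-- (3) External dislocated: the same rules kept in a separate file and
     pointed to from the document or from the tool's configuration -->
<its:rules xmlns:its="http://www.w3.org/2005/11/its" version="1.0">
  <its:translateRule selector="//code" translate="no"/>
</its:rules>
```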
As far as "processors" *compliance* goes, I think we don't have to define a level for each case. Maybe we can say that an application is ITS compliant when it successfully implements at least one of the data categories(?), and that it should state which one(s) with any compliance claim.

CL> I like Yves' approach of distinguishing between test cases and conformance/compliance. From my point of view test cases can help with the following:
CL>
CL> 1. verify that the framework adequately addresses an issue
CL> 2. possibly help with the definition of conformance
CL> 3. test conformance
CL>
CL> I think that the design of the test suite (that is, the collection of test cases instrumented with input, output, id, etc.) which Yves has drafted is very promising.
CL>
CL> I am still not sure about the granularity of conformance we should be aiming at. Possible pros and cons of a fine-grained approach could be the following:
CL>
CL> pro: may yield many conformant implementations, since only a limited number of features would have to be implemented and thus the implementation effort might be low
CL> con: may yield confusion among tool users/buyers, since they cannot easily know whether a conformant tool really fits their i18n/l10n requirements
CL>
CL> One approach to arrive at a more coarse-grained granularity could, of course, be to start by clustering/partitioning features and basing conformance on those clusters. Example:
CL>
CL> Definition for Cluster A
CL>
CL> - data categories 'ruby' and 'directionality'
CL> - only local rules
CL>
CL> Conformance Clause
CL>
CL> - An implementation of this standard is profile-1 conformant if it implements all features defined in Cluster A.
CL>
CL> This seems to be an approach taken by other standards (they seem to use terms like "level" or "profile"). CSS 1, for example, from my understanding had two clusters: core features and extended features (see http://www.w3.org/TR/CSS1#css1-conformance). XSL-FO has three (called "basic", "extended" and "complete"; see http://www.w3.org/TR/xsl/slice8.html#conform). It defines for each feature (objects and properties) whether a conformance level requires its implementation or not (see http://www.w3.org/TR/xsl/sliceB.html#FO-summary and http://www.w3.org/TR/xsl/sliceC.html#property-index).
CL>
CL> Following this line of thinking, we would need to decide on two things with regard to conformance:
CL>
CL> 1. Do we go for several different types of conformance?
CL> 2. How do we partition data categories, support for selection mechanisms, etc. to arrive at the different types?

We still have to decide whether we want to allow processors that implement only in situ rules to be compliant or not. We need to decide this soon.

For the test cases, based on Felix's and Christian's ideas, maybe we could have something for each data category that looks like this:

1. In schema
   1.1 XML DTD
   1.2 XML Schema
   1.3 RELAX NG
2. Dislocated
   2.1 External to the document
   2.2 Within the document
3. In situ
4. Combination of all cases

For each of these lines we would have:

- The description of the test (with a reference to the clause in the specification).
- At least one test set that would have:
  - An "Input files" entry with the list of all the input files required, for example a source XML document and a document containing dislocated rules.
  - An "Expected Result" entry with a hand-made (or at least hand-checked) document that describes the expected output.
  - Zero, one or more result files generated from the various implementations we will have (and hopefully we will have at least one example for each case).

(A rough sketch of what one such test set could contain is appended at the end of this message.)

See the translatability data category for an example:
http://www.w3.org/International/its/tests/#Trans_DislocatedExternal
(I'm still missing the clause references.)

It would probably be good to have several test sets in some cases, for example: with namespaces, without namespaces, etc.

In addition to deciding whether this is a good approach and how it can be improved, we should also maybe make the general layout easier to work with, for instance by breaking the Test Suite document down into several files (one per data category) so that several people can work on different parts at the same time. Maybe the result documents should be integrated within the test suite document to make them easier to look at, etc.

For the test implementations, we should try to make them generic enough that they can be used regardless of the input files. ...I am sure you have plenty of ideas.

Cheers,
-yves
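P.S. Purely as an illustration, here is what one minimal test set for the translatability category, dislocated-external case, could contain. The file names and the format of the "Expected Result" are made up; they are exactly the kind of thing we still need to agree on:

```xml
<!-- input1.xml (hypothetical file name): the document under test -->
<doc>
  <p>Some translatable text with a <code>keepMe</code> identifier.</p>
</doc>

<!-- rules1.xml (hypothetical file name): the external dislocated rules -->
<its:rules xmlns:its="http://www.w3.org/2005/11/its" version="1.0">
  <its:translateRule selector="//code" translate="no"/>
</its:rules>

<!-- expected1.xml: one possible hand-checked "Expected Result" format,
     here simply the input with the resolved translate values made explicit -->
<doc xmlns:its="http://www.w3.org/2005/11/its">
  <p its:translate="yes">Some translatable text with a
    <code its:translate="no">keepMe</code> identifier.</p>
</doc>
```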
Received on Wednesday, 15 February 2006 10:21:51 UTC