
RE: On conformance

From: Lieske, Christian <christian.lieske@sap.com>
Date: Wed, 15 Feb 2006 11:15:13 +0100
Message-ID: <0F568FE519230641B5F84502E0979DD1048F7751@dewdfe12.wdf.sap.corp>
To: "Yves Savourel" <yves@opentag.com>, <public-i18n-its@w3.org>

Hello everyone,

Please find my comments below (starting with "CL>").

Best regards,

-----Original Message-----
From: public-i18n-its-request@w3.org
[mailto:public-i18n-its-request@w3.org] On Behalf Of Yves Savourel
Sent: Sonntag, 12. Februar 2006 06:51
To: public-i18n-its@w3.org
Subject: RE: On conformance

Hi Christian, Felix, and all,

> So I think you should provide all tests which you 
> think which are necessary, not only the ones for 
> "terminology". This might be a very complicated task,
> *if* you assume a lot of conformance levels, and 
> even conformance specific conformance criteria to a 
> single data category.

Our data categories are quite diverse: Ruby has little to do with
translatability, for example. This means it probably makes sense for
the applications that will implement ITS to provide support for only
some of the data categories.

CL> Or only provide _limited_ support (cf. the discussion on

For example, a translation tool would implement the translatability
and localization information data categories but completely
ignore terminology.

CL> I am not sure that all translation tools would do that.

Therefore I think we have to test the 6 data categories separately (I
think <its:span> is something different
and can be tested along with all the in situ cases).
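For concreteness, here is a sketch of what an in situ (local) annotation with <its:span> might look like. The syntax follows the eventual ITS 1.0 namespace and attribute names; the exact syntax in the draft at the time may have differed:

```xml
<!-- In situ (local) markup: the ITS information sits directly on the
     content it applies to. Hypothetical document, ITS 1.0-style syntax. -->
<text xmlns:its="http://www.w3.org/2005/11/its">
  <p>Press <its:span translate="no">Ctrl+S</its:span> to save.</p>
</text>
```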

From the "rules location" viewpoint we have: in XML DTD, in XML Schema,
in RELAX NG, external dislocated, internal dislocated, and
in situ... 6 cases. In addition, I think it's important to also have
test cases for each data category where all the different
"rules locations" are combined. So 7 cases.
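As an illustration of the "dislocated" locations, here is a sketch of internal dislocated rules, again using ITS 1.0-style syntax for the sake of the example (the draft syntax at the time may have differed):

```xml
<!-- "Internal dislocated" rules: a rules element in the document header
     selects nodes elsewhere in the same document. An "external dislocated"
     variant would put the same its:rules element in a separate file and
     link to it from the host document. Hypothetical document. -->
<doc xmlns:its="http://www.w3.org/2005/11/its">
  <header>
    <its:rules version="1.0">
      <!-- Part numbers anywhere in the document are not translatable -->
      <its:translateRule selector="//partNum" translate="no"/>
    </its:rules>
  </header>
  <body>
    <p>Order part <partNum>SK-123</partNum> today.</p>
  </body>
</doc>
```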

This gives us a matrix of the 6 data categories against the 7 rules
locations... 42 cases overall (although there may be a few fewer, as
not all types of rules location apply to all data categories).

I think it's important that we provide at least one standalone test case
for each of these combinations. It is quite a bit of work,
but it is probably the only way to ensure ITS is sound. 

As far as "processors" *compliance* goes, I think we don't have to define a
level for each case. Maybe we can say that an application is
ITS compliant when it successfully implements at least one of the data
categories(?), and that it should state which one(s) with any
compliance claim.

CL> I like Yves' approach of distinguishing between test cases and
conformance/compliance. From
CL> my point of view test cases can help with the following:
CL> 1. verify that the framework adequately addresses an issue
CL> 2. possibly help with the definition of conformance
CL> 3. testing conformance
CL> I think that the design of the test suite (that is the collection of
test cases instrumented with
CL> input, output, id etc.) which Yves has drafted is very promising.
CL> I am still not sure about the granularity of conformance we should
be aiming at. Possible pros and
CL> cons for a fine grained granularity could be the following:
CL> pro: may yield many conformant implementations, since only a limited
CL>	set of features would have to be implemented, and thus the effort
CL>	for implementation might be low
CL> cons: may yield confusion amongst tool users/buyers, since they
CL>	cannot easily know whether a
CL>	conformant tool really fits their i18n/l10n requirements
CL> One approach to arrive at a more coarse-grained granularity could of
CL> course be to start from clustering/partitioning features, and to base
CL> conformance on clusters. Example:
CL> Definition for Cluster A
CL>	 - data categories 'ruby' and 'directionality'
CL> 	 - only local rules
CL>  Conformance Clause
CL>    - An implementation of this standard is profile-1 conformant if
it implements all
CL>      features defined in Cluster A
CL> This seems to be an approach taken by other standards (they seem to
use terms like
CL> "level", or "profile"). CSS 1 from my understanding for example had
two clusters:
CL> core features and extended features (see
CL> XSL-FO has three (called "basic", "extended" and "complete"; see
CL> It defines for each feature (objects and properties), whether a
conformance level
CL> requires its implementation or not (see
CL> http://www.w3.org/TR/xsl/sliceC.html#property-index).
CL> Following this line of thinking, we would need to decide on two
things with regard to conformance:
CL>	1. Do we go for several different types of conformance?
CL>   2. How do we possibly partition data categories, support for
selection mechanisms etc. to arrive at different types?

We still have to decide whether we want to allow processors that implement
only in-situ rules to be compliant or not. We need to decide
this soon.

For the test cases, based on Felix and Christian's ideas, maybe we could
have something for each data category that look like this:

1. In schema
	1.1 XML DTD
	1.2 XML Schema
	1.3 RELAX NG
2. Dislocated
	2.1 External to the document
	2.2 Within the document
3. In situ
4. Combination of all cases
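For the "combination" case, the interesting point is presumably precedence when several rules locations apply to the same node. A sketch, again using ITS 1.0-style syntax on a hypothetical document, where local markup would take precedence over a dislocated rule:

```xml
<!-- Combination case: a global rule and local markup apply to the same
     node; the local (in situ) markup takes precedence over the
     dislocated rule. Hypothetical document, ITS 1.0-style syntax. -->
<doc xmlns:its="http://www.w3.org/2005/11/its">
  <header>
    <its:rules version="1.0">
      <its:translateRule selector="//code" translate="no"/>
    </its:rules>
  </header>
  <body>
    <!-- The local attribute overrides the global rule: this <code>
         element IS translatable despite the rule above. -->
    <code its:translate="yes">Save your work</code>
  </body>
</doc>
```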

For each of these lines we would have:

- The description of the test (with a reference to the relevant clause in
the specification).

At least one test set that would have:

- An "Input files" entry with the list of all the input files required,
for example a source XML document and a document containing
dislocated rules.

- An "Expected Result" entry with a document hand-made (or at least
hand-checked) that describes the expected output.

- Zero, one, or more result files generated from the various
implementations we will have (and hopefully we will have at least one
example for each case).

See the translatability data category for an example.
(I'm still missing the clause references.)

It would probably be good to have several test sets in some cases, for
example: with namespaces, without namespaces, etc.

In addition to deciding whether this is a good approach and how it can be
improved, we should maybe also make the general layout easier to
manipulate, for instance by breaking the Test Suite document down
into several files (one per data category) so that several people
can work on different parts at the same time. Maybe the result document
should be integrated within the test suite document to make
it easier to look at, etc.

For the test implementations, we should try to make them generic enough
that they can be used regardless of the input files.

...I am sure you have plenty of ideas.

Received on Wednesday, 15 February 2006 10:21:51 UTC
