Re: Conformance and Implementations

Sorry for joining the thread this late.

In general, I'd say that it is nearly impossible to introduce conformance 
metrics, no matter how advanced a conformance section your specification 
may have. It is for this reason that we in the DOM TS (as mentioned here 
and there on this and other lists, and in the DOM TS Process Document in 
particular) use the negative approach: passing our test suite means 
nothing beyond "OK" conformance, while failing even one test means 
non-compliance.
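That negative verdict rule can be sketched in a few lines (my own 
illustration, not actual DOM TS harness code; the real suite is generated 
into several language bindings):

```python
def suite_verdict(results):
    """Negative conformance: failing even one test means non-compliance;
    passing everything asserts nothing stronger than "OK" conformance."""
    return "non-compliant" if any(r == "fail" for r in results) else "ok"

print(suite_verdict(["pass", "pass", "pass"]))  # ok
print(suite_verdict(["pass", "fail", "pass"]))  # non-compliant
```

The point of the asymmetry is that a clean run proves only the absence of 
detected failures, never the presence of full compliance.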

I've read most conformance sections in W3C specs, and my firm belief is 
that no existing TS, not even the DOM TS in particular, can test full 
compliance, regardless of its monolithic approach with licences, public 
development, WG coordination, documented test cases in code and prose, 
and automated test file generation into many language bindings. I believe 
this happens for three reasons:

1. It is not clear what conformance means to begin with.
2. It is not clear how much of every single aspect/facet needs to be 
tested to deliver a meaningful result (the DOM is combinatorially huge); 
I believe this to be the biggest problem.
3. It is not clear who (normatively speaking) does the best job of 
interpreting the specification in question (which is why the DOM TS ML 
Schema is generated directly from the DOM specs). Is it the WG that wrote 
the spec? Is it a trusted third party? Is it the member companies? I 
believe this to be the most serious problem.
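To give a feel for reason 2, even a toy model of the test space grows 
multiplicatively (all counts below are hypothetical, not actual DOM 
figures):

```python
# Toy model: the test space is a product of independent dimensions.
interfaces = 24            # hypothetical count of interfaces in a module
methods_per_interface = 10
node_types = 12            # Element, Text, Comment, ...
edge_cases = 5             # null, empty string, wrong type, boundary, valid

cases = interfaces * methods_per_interface * node_types * edge_cases
print(cases)  # 14400 cases for even this crude model
```

And this still ignores interactions between calls, live collections, and 
mutation ordering, which multiply the space further.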

So of course we end up in the situations you have already described: 
politics (and, of course, [ab]use of market forces and mechanisms).

The thing I would personally like to see discussed is this: what is the 
W3C's _official_ position on testing, interdependency and conformance? 
Given, for example, that the DOM TS has not officially been used as a 
pool of experience for subsequent test suites (at least not to my 
knowledge), and that this work has thus not been coordinated with later 
efforts, should there perhaps be a clause in the specs indicating how 
test suites should be written in a coordinated fashion? I'm afraid we may 
be missing the forest for the trees by moving in too many directions at 
once, eventually ending up with incompatible testing frameworks.

I'm certainly hesitant to give an answer as to what degree one product or 
another conforms to this or that specification, which is unfortunate 
since:

a. I should know, since I was part of the group that built the test suite.
b. It's part of what I do for a living.

But then again, I don't have the (formal) tools to say anything, since 
the test suite has been designed to help implementors increase their 
support for this technology. This is a rational constraint, but it may 
well become a stopper in the future.
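To give a flavour of such an implementation-support test, here is a 
minimal sketch of my own (written against Python's stdlib 
xml.dom.minidom, not an actual DOM TS case): a DOM Level 1 Core check 
that createElement sets tagName and that appendChild returns the 
appended node.

```python
from xml.dom.minidom import parseString

doc = parseString("<root/>")
elem = doc.createElement("item")
assert elem.tagName == "item"              # DOM Core: tagName reflects the name
returned = doc.documentElement.appendChild(elem)
assert returned is elem                    # DOM Core: appendChild returns the node
print("pass")
```

A test like this helps an implementor pin down one behaviour, but says 
nothing by itself about overall conformance.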

Another problem is metrics: as far as I know, there is no formal 
relevance-metrics framework, whatever connotations 'relevance' may have. 
I've indicated this in a reply to Sean's email on EARL and related 
technologies, and I think it may be a partial answer to why we can't 
answer the compliance question to begin with. Is there any work on this 
kind of measurement around?

Again, sorry for jumping on the train this late, but I thought it might 
be useful to share my views, having worked with the DOM TS for seven 
months now.

Kind regards,

/Dimitris


On Tuesday, October 9, 2001, at 11:49  PM, Rob Lanphier wrote:

> At 02:32 PM 10/9/01 -0600, Lofton Henderson wrote:
>> At 11:11 AM 10/9/01 -0600, Alex Rousskov wrote:
>>>     10. Big companies start arguing that their competitors are
>>>         abusing test suites. They also realize the opportunity
>>>         to narrow down the competition using legal barriers.
>>>     11. W3C designates an independent 3rd party to administer the
>>>         tests. Now the testing is fair and impartial.
>>>     12. An independent 3rd party charges $$$ to verify conformance,
>>>         blocking the way for small developers to make any legal
>>>         conformance claims
>>
>> Anecdote.  For many years, I ran a small (5-person) technology 
>> company.  A trade association (ATA) said they wanted their suppliers 
>> certified (a certification service was available from a 3rd party).  
>> We were the first company to get certified, thinking of it as a 
>> competitive advantage.  I actually think that people's willingness to 
>> get certified was in inverse proportion to the company size.
>
> I agree with Lofton's hunch here.  The bigger the company, the easier 
> it is for them to unilaterally define what FooML is.  Small companies 
> without brand recognition are usually the ones that need to bootstrap 
> their brand off of the W3C or like body.
>
> Rob
>

Received on Thursday, 18 October 2001 15:38:43 UTC