Re: [HTMLWG] CfC: Adopt "Plan 2014" and make some specific related decisions

On 10/22/2012 04:10 PM, Boris Zbarsky wrote:
> On 10/22/12 7:27 AM, Sam Ruby wrote:
>> To turn this discussion more constructive, the problem that needs to be
>> solved is the misconception that exists that the HTML5 specification is
>> all that needs to be implemented
>
> I think that what Jonas and Henri are concerned about is a parallel
> problem, which is the misconception that if something is in a document
> found on w3c.org then it's "a spec" and needs to be implemented, tested
> for in homegrown conformance tests like html5test.com, and so forth.
> This has been a problem even for technologies that have been formally
> dropped by the W3C (e.g. WebSQL).

One solution to this might be to suck the oxygen out of the market for
unofficial feature test pages*, by doing a better, more authoritative
job ourselves.

I have previously argued against making a big show of test results, and
I still think there is a significant danger of creating perverse
incentives if people start writing tests not to improve implementation
quality but to make themselves look good or, in very sad cases, to make
others look bad. But perhaps it is worth re-examining the issue to see
whether there is a path we can tread that gets us the good effects of
more prominent reporting of test results without the harm.

I have been vaguely pondering the notion of assigning each test a
priority, so that an implementation that passed all the P1 tests would
have "basic support" for a feature, and one that passed all the P1-P5
tests would have "excellent support", or something along those lines.
That might strike a reasonable balance between conformance tests as a
promotional tool (something the market clearly desires, regardless of
what we may think) and conformance tests as a way of actually improving
interoperability.
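
To make that concrete, here is a rough sketch of how results might be
rolled up per feature. The data model, labels, and thresholds are
purely illustrative, not a proposal for any particular test harness:

from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    priority: int  # 1 (most essential) .. 5 (least essential)
    passed: bool

def support_level(results: list[TestResult]) -> str:
    """Coarse support label for one feature, derived from priority bands."""
    if not results:
        return "no tests"
    highest_clean_band = 0
    for p in range(1, 6):
        band = [r for r in results if r.priority == p]
        if band and not all(r.passed for r in band):
            break
        highest_clean_band = p
    if highest_clean_band == 0:
        return "no support"        # some P1 test fails
    if highest_clean_band < 5:
        return "basic support"     # all P1 tests pass; a higher band fails
    return "excellent support"     # every P1-P5 test passes

# Example: a feature passing its P1 tests but failing a P3 test.
results = [
    TestResult("no-gaping-security-hole", 1, True),
    TestResult("basic-parsing", 1, True),
    TestResult("correct-exception-type", 3, False),
]
print(support_level(results))      # -> "basic support"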

I have several concerns with this idea. It might be a lot of work, and 
one certainly couldn't expect test submitters to do it. It might lead to 
test classification fights (but surely this would be better than people 
fighting to drop tests altogether?). A single test might fail for a P1 
reason ("there is a huge security hole") or a P3 reason ("the wrong 
exception type is thrown"). I don't know if these are insurmountable 
issues or if there is some other tack we could take across this 
particular minefield.

* Specifically those like html5test.com that are often mistaken for
measures of goodness.
