- From: Giuseppe Pascale <giuseppep@opera.com>
- Date: Tue, 23 Oct 2012 09:13:48 +0200
- To: public-html@w3.org, "James Graham" <jgraham@opera.com>
On Mon, 22 Oct 2012 17:11:22 +0200, James Graham <jgraham@opera.com> wrote:
> On 10/22/2012 04:10 PM, Boris Zbarsky wrote:
>> On 10/22/12 7:27 AM, Sam Ruby wrote:
>>> To turn this discussion more constructive, the problem that needs to be
>>> solved is the misconception that exists that the HTML5 specification is
>>> all that needs to be implemented
>>
>> I think that what Jonas and Henri are concerned about is a parallel
>> problem, which is the misconception that if something is in a document
>> found on w3c.org then it's "a spec" and needs to be implemented, tested
>> for in homegrown conformance tests like html5test.com, and so forth.
>> This has been a problem even for technologies that have been formally
>> dropped by the W3C (e.g. WebSQL).
>
> One solution to this might be to suck the oxygen out of the market for
> unofficial feature test pages*, by doing a better, more authoritative,
> job ourselves.
>
> I have previously argued against making a big show of test results, and
> I still think that there is a significant danger of creating perverse
> incentives if people start creating tests not to improve implementation
> quality, but to make themselves look good or — in very sad cases — to
> make others look bad. But perhaps it is worth re-examining the issue and
> seeing if there is a path that one can tread where we get the good
> effects of more prominent reporting of test results, without the harm.
>
> I have been vaguely pondering the notion of assigning each test a
> priority, so that an implementation that passed all the P1 tests would
> have "basic support" for a feature, and one that passed all the P1-P5
> tests would have "excellent support" for a feature, or something. That
> might provide a reasonable balance between conformance tests as a
> promotional tool — something which it is clear that the market desires,
> regardless of what we may think — and conformance tests as a way of
> actually improving interoperability.
>
I think this is actually a good idea. It would also help get more test
cases into the pool without immediately promoting all of them to a
"MUST PASS" status (or dropping them altogether).
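To make the tiering idea concrete, here is a rough sketch of how priority-tagged test results could be mapped to coarse support labels. All names, thresholds, and labels here are illustrative assumptions, not anything the W3C has specified:

```python
# Hypothetical sketch of the P1-P5 tiering idea from the thread.
# Names, labels, and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class TestResult:
    name: str
    priority: int   # 1 = most essential, 5 = most demanding
    passed: bool


def support_level(results):
    """Map priority-tiered pass/fail results to a coarse support label."""
    def all_passed_up_to(max_priority):
        return all(r.passed for r in results if r.priority <= max_priority)

    if all_passed_up_to(5):
        return "excellent support"   # every P1-P5 test passes
    if all_passed_up_to(1):
        return "basic support"       # all P1 tests pass, higher tiers may fail
    return "incomplete support"


results = [
    TestResult("parses-basic-markup", 1, True),
    TestResult("throws-correct-exception-type", 3, False),
]
print(support_level(results))  # "basic support"
```

Note that this simple scheme sidesteps neither of the problems James raises below: someone still has to assign the priorities, and a single test can fail for reasons of very different severity.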
> I have several concerns with this idea. It might be a lot of work, and
> one certainly couldn't expect test submitters to do it.
A coordinated, well-promoted, and well-organized testing effort is a lot of
work regardless.
I think it is time for the wider W3C community to work together on this and
make testing a first-class citizen (and not just something you need in order
to reach Rec status).
This may be a good topic for discussion at TPAC.
> It might lead to test classification fights (but surely this would be
> better than people fighting to drop tests altogether?). A single test
> might fail for a P1 reason ("there is a huge security hole") or a P3
> reason ("the wrong exception type is thrown"). I don't know if these are
> insurmountable issues or if there is some other tack we could take
> across this particular minefield.
>
There will be issues for sure, but this shouldn't stop the W3C from working
on it. Because if the W3C doesn't do this, others will, and we will end up
with N test sites/specifications for people to fight over.
/g
> * Specifically those like html5test that are often mistaken for
> measures of goodness.
--
Giuseppe Pascale
TV & Connected Devices
Opera Software
Received on Tuesday, 23 October 2012 07:14:18 UTC