Re: Vendor harnesses

On Tue, 10 May 2011, Linss, Peter wrote:
>> Why? I am generally not interested in the W3C collecting results and
>> haven't really understood why other people are. Test results are only
>> really useful for QA; you can see where you have bugs that you need to fix
>> and ensure that you don't regress things that used to work. But that's a
>> very vendor specific thing; it's not something that W3C has to do. When
>> people try to use tests for non-QA purposes like stating that one browser
>> is more awesome than another, it leads to bad incentives for people
>> submitting tests.
>
> It's called transitioning from CR to PR. The working groups need test 
> result data in order to advance specs. That's the only reason the CSS wg 
> spent years building a test suite in the first place; without the result 
> data, the tests are useless to the wg. Frankly, the only reason I spent so 
> much of my own time building the harness was to track testing coverage 
> and to generate the implementation report for CSS 2.1.

The failure of the CSS 2.1 test suite wasn't that it was so hard to get 
people to create implementation reports; that was merely a symptom. The 
failure was that the tests weren't being run on a day-to-day basis by 
browser vendors long before attempting to transition to PR. That's the 
problem that we need to solve. Once you have people using the tests for 
real work, W3C Process stuff falls out as a happy side effect.

Received on Wednesday, 11 May 2011 07:00:09 UTC