Re: Vendor harnesses

On May 10, 2011, at 2:04 PM, James Graham wrote:

> 
> To be clear, my opinion is:
> 
> Goal:
> Make all testcases usable (and, therefore *used*) by vendors
> 
> Non-goal:
> Make the actual software that vendors use to run the test cases

Well, I'm not in the business of building testing tools for vendors here either. I just don't want to see vendors investing in private testing efforts and ignoring the testing needs of the W3C. To the extent we create synergy, we foster cooperation. I'm sick of hearing vendors say they don't have the resources to help with the W3C testing effort because they're spending them all on their internal testing, and then they don't even submit an implementation report.

> 
>> When I talk about my goal here (and I did say it was a personal goal), 
>> what I mean is that over time, I'd like to see the W3C system evolve to 
>> the point that it can serve the same needs as what vendors need for 
>> their internal testing. Ideally, if a new vendor comes to the scene down 
>> the road, they'll be able to simply adopt the W3C testing system rather 
>> than roll their own. And while I'm not expecting existing vendors to 
>> drop their own systems and switch to the W3C's, I do want to see the 
>> systems converge, so that at least where there are overlaps in 
>> functionality, the same tools can be used. 
> 
> OK, it's fair enough if you have that as a personal goal, but I strongly 
> feel that it isn't a good use of the group's time to work on it. I like 
> simple things that will have obvious short-term benefits.

I wasn't proposing spending group time on this aspect. In fact, I'd be opposed to that myself. I don't want to get bogged down in those issues now, but that doesn't mean I won't be thinking about them.

> 
> 
>> I don't want to see the W3C testing system try to adapt to every 
>> vendor's proprietary testing hooks, but it would be good if the W3C's 
>> testing system was extensible so that vendors could write their own 
>> adapters to connect their testing hooks to the W3C harness, for example. 
>> Even better would be for browser vendors to agree on a standardized 
>> testing API so that any browser can integrate into a standard testing 
>> harness.
>> 
>> Let me give a concrete example: with Firefox, you can make a special 
>> build of the browser that can automatically compare reference tests and 
>> gather results. Wouldn't it be useful if that build could run the W3C's 
>> testing harness directly and submit results there?
> 
> Why? I am generally not interested in the W3C collecting results and 
> haven't really understood why other people are. Test results are only 
> really useful for QA; you can see where you have bugs that you need to fix 
> and ensure that you don't regress things that used to work. But that's a 
> very vendor specific thing; it's not something that W3C has to do. When 
> people try to use tests for non-QA purposes like stating that one browser 
> is more awesome than another it leads to bad incentives for people 
> submitting tests.

It's called transitioning from CR to PR. The working groups need test result data in order to advance their specs. That's the only reason the CSS WG spent years building a test suite in the first place; without the result data, the tests are useless to the WG. Frankly, the only reason I spent so much of my own time building the harness was to track testing coverage and to generate the implementation report for CSS 2.1.
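
To make that concrete, the aggregation an implementation report needs is conceptually simple. Here's a rough sketch in Python, using a made-up result format rather than the harness's actual schema, that counts how many implementations pass each test and flags whether it meets a two-passing-implementations exit criterion:

    # Hypothetical sketch only; the data layout and names below are
    # illustrative, not the W3C harness's real schema.
    from collections import defaultdict

    # results[vendor][test_id] = "pass" or "fail"
    results = {
        "BrowserA": {"t-001": "pass", "t-002": "pass", "t-003": "fail"},
        "BrowserB": {"t-001": "pass", "t-002": "fail", "t-003": "fail"},
    }

    def summarize(results, required_passes=2):
        """Count passing implementations per test and flag tests that
        meet the exit criterion (e.g. two interoperable passes)."""
        passes = defaultdict(int)
        for vendor_results in results.values():
            for test_id, outcome in vendor_results.items():
                passes[test_id] += (outcome == "pass")
        return {t: (n, n >= required_passes) for t, n in passes.items()}

    for test_id, (n, ok) in sorted(summarize(results).items()):
        print(f"{test_id}: {n} passing, {'meets' if ok else 'fails'} exit criteria")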

Received on Tuesday, 10 May 2011 21:20:51 UTC