Re: Test case review

On May 10, 2011, at 1:20 PM, James Graham wrote:

> This is where I think our visions differ. I think that building a 
> good, or even good-enough, review system is enough of a challenge on its 
> own without adding a huge range of testcase management features on top of 
> it. I expect managing tests to be a *hard* problem. For example, tests can 
> be identified by:
> a URL
> a pair of URLs
> a URL + a name
> 
> Any of those URLs may have a query component and/or a fragment id. So 
> there is no relationship between a particular testcase and a particular 
> file; there is a kind of cloud of files that might be associated with a 
> given test. Trying to work out what those files are is impossible in the 
> general case; for example, they can be included by document.write("some" + 
> "resource") or similar. So it will be a huge amount of work to keep all 
> the metadata up to date even if simple cases are handled automatically. 
> Basically, I think that if the test can work with incorrect metadata then 
> it is highly likely that the metadata will be wrong. So I am opposed to 
> systems that try to store lots of rarely-used metadata.

Well, my bottom line is that I'm building a system for the CSS group to meet their needs. Where this makes sense to serve as the foundation for a W3C-wide system, that's great, and I'm happy to help as I can. I accept that there are test case structures and metadata needs quite foreign to what the CSS group needs; my system will likely not deal with those, at least at first, but it should still serve basic needs. Trying to meet the detailed needs of every WG is just as hard as (if not harder than) trying to meet the internal needs of vendors.
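
(No argument on the dynamic-inclusion point, by the way. A contrived sketch makes it obvious; the support file name below is made up, but the point is that no static scan of this source will ever turn up the dependency:

    <script>
      // Nothing in this source contains the literal string
      // "support/data-1.js", so grepping the test for its
      // dependencies finds nothing; the name only exists at runtime.
      var n = 1;
      document.write('<script src="support/data-' + n +
                     '.js"><\/script>');
    </script>

Which is exactly why I don't plan to try to infer that kind of metadata automatically.)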

> 
> I don't think that W3C needs to be in the business of regression testing 
> browsers, so I don't see why we care about automatically deleting old 
> results. Indeed I am skeptical about W3C publishing results at 
> all. When W3C does publish results it should clearly be for a single 
> revision of the browsers and a single revision of the testsuite. If newer 
> results are available, the old ones could be discarded wholesale or 
> archived somewhere well out of the way. It is the browser vendor's job to 
> do their own day-to-day regression tracking and I don't think we need to 
> be involved.

You misunderstand here: it's not tracking the revisions of the browsers (well, it does store the version of the browser, but it doesn't do anything with that data). What I'm talking about are revisions of the _tests_. When a test is updated, the harness ignores results stored against previous versions of the test. We find issues with the tests all the time and update them as needed.
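
To make that concrete: results are keyed by the revision of the test, so a lookup against the current revision simply never sees stale entries. A rough sketch in JavaScript (the field names are illustrative, not the harness's actual schema):

    // Illustrative only; this is not the harness's real data model.
    // A result recorded against an older revision of a test never
    // matches the current lookup, so it is ignored rather than deleted.
    function currentResults(allResults, test) {
      return allResults.filter(function (r) {
        return r.testId === test.id &&
               r.testRevision === test.revision;
      });
    }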

> 
> For approved/submitted tests, I am increasingly of the opinion that using 
> different directories is likely the wrong approach. Using two branches in 
> the VCS combined with a commit-based review system seems like it has a 
> number of advantages. One could still have w3-test.org show submitted/ 
> and approved/ directories, but in order to move tests from one to the 
> other, one would simply apply commits from the submitted branch onto the 
> approved branch, rather than trying to copy files and run the risk of 
> leaving stuff behind or copying the wrong bits.

Effectively no difference here (besides, in svn, copying a file to another directory is morally equivalent to a branch). If you use different branches, you still run the risk of failing to merge bits that aren't obviously related.
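
To spell out the equivalence (the commands are illustrative; actual repository paths and branch names will differ):

    # svn: "promoting" a test is a cheap copy, recorded with full
    # history; effectively a per-file branch.
    svn copy ^/submitted/foo.html ^/approved/foo.html -m "Approve foo"

    # git: the promotion you describe, applying the reviewed commit
    # from the submitted branch onto the approved one.
    git checkout approved
    git cherry-pick <commit-from-submitted>

Either way, the failure mode is the same: the commit (or copy) you forget to bring across is the one that bites you.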

Received on Tuesday, 10 May 2011 21:05:34 UTC