- From: Linss, Peter <peter.linss@hp.com>
- Date: Fri, 1 Jun 2012 13:58:27 +0000
- To: James Graham <jgraham@opera.com>
- CC: public-test-infra@w3.org
On May 30, 2012, at 10:46 AM, James Graham wrote:

> On Wed, 30 May 2012, Linss, Peter wrote:
>
>> On May 30, 2012, at 4:53 AM, James Graham wrote:
>>
>>> For many test repositories we are using a submitted/approved directory structure. This is not working well, for several reasons:
>>>
>>> * There are typically far more useful tests in the submitted directories than in the approved directories. This is due to a general lack of time/interest in reviewing tests (I am just as guilty as anyone). I doubt this situation will change.
>>>
>>> * It makes it difficult for us to import the tests. Because of the way we test, it is very helpful if the paths to tests remain constant (cool URIs and all that). Moving the tests around is a severe inconvenience, as we have to update our metadata with the new paths.
>>>
>>> I suggest we go with a single-level directory structure. If people want to keep metadata about which tests have been reviewed, that should not be encoded in the filesystem hierarchy. I doubt we will do any better than assuming that tests are good and that implementors will file bugs when they find a bad test.
>>
>> The submitted/approved split is a carryover from the CSS test repository before we had a test tracking tool. Even though we do have a tool now, we still find it somewhat useful to have a distinction between 'blessed' tests and those that are being worked on and haven't had any real kind of review.
>
> I haven't yet seen any evidence that this distinction has been useful for html/webapps. On the other hand, I do see the harm it causes as test paths/URLs change.

Note that the paths/URLs don't change if you look at the output of the build process...

> If I thought this kind of distinction was useful I would suggest implementing it at the VCS level rather than at the filesystem level.
>
>> The expectation when other tools import the tests is that they not import directly from the source tree, but from the build output. The CSS test suites have build tools that gather all the tests in the various source directories and copy the proper tests into per-suite output directories. The build tools also generate manifest files for other systems to import the test data (and metadata), as well as human-readable index files.
>>
>> I'm working on making the CSS build tools more generic so they can be used with other test suites. Once that's done, if you use the build output, then the layout of the source repository is irrelevant to your tools.
>
> I don't think we want a build step. In our ideal scenario, we clone the repository, create a local branch with any patches we need to get things working in our test setup, and then update the testsuite with a simple fetch/rebase (plus whatever work is needed on our end to add any new tests). Anything that requires QA to set up a build environment and mess around getting the output files from the build into a local repository that's not just a clone of the upstream is an impediment to keeping the tests up to date. We have a lot of experience with testsuites that are hard to update, and it all suggests that the probability of someone doing the update falls dramatically as the complexity of the operation increases.
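[Editor's sketch: the update flow James describes above (clone once, carry local patches on a branch, then fetch and rebase) could be driven by a small script along these lines. The repository URL and branch names are illustrative assumptions, not the actual W3C test repository setup.]

```python
#!/usr/bin/env python
"""Sketch of the clone/branch/fetch-rebase import workflow.

The upstream URL, directory, and branch names below are assumptions
made for illustration only.
"""
import subprocess

UPSTREAM = "https://example.org/tests.git"  # assumed URL
CLONE_DIR = "tests"
LOCAL_BRANCH = "local-patches"  # branch carrying vendor-specific fixes

def git(*args, cwd=CLONE_DIR):
    """Run a git command inside the test checkout."""
    subprocess.check_call(["git"] + list(args), cwd=cwd)

def initial_import():
    # One-time setup: clone upstream, then keep local patches on a branch.
    subprocess.check_call(["git", "clone", UPSTREAM, CLONE_DIR])
    git("checkout", "-b", LOCAL_BRANCH)

def update():
    # Routine update: fetch upstream, replay local patches on top of it.
    git("fetch", "origin")
    git("rebase", "origin/master")

if __name__ == "__main__":
    update()
```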
There's a fundamental problem here: while I understand that most of the suites don't (yet) use a build step, and many won't need one (or may not be able to use one), there are some test suites where a build step is _mandatory_, like the CSS suites. And that's not going to change; if anything, we're getting more dependent on the build process over time. If you can't incorporate a build step into your workflow, then there are a _lot_ of tests that will be unavailable to you: around 11,000 CSS tests (and that number is growing rapidly). (Actually, if you consider each format the tests are available in, that's more like 33,000 tests.)

Peter
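[Editor's sketch: for readers unfamiliar with the kind of build step Peter describes, a minimal gather-and-manifest pass might look like the following. The source layout, file extensions, and manifest format are simplified assumptions; the real CSS build tools are considerably more involved.]

```python
#!/usr/bin/env python
"""Sketch of a build step: gather tests from per-area source directories
into a per-suite output directory and emit a manifest for importers.

Directory names and the manifest format are assumptions, not the actual
CSS build tools.
"""
import os
import shutil

SOURCE_DIRS = ["css2.1/src", "css3-color/src"]  # assumed layout
OUTPUT_DIR = "dist/css21"                       # per-suite output

def build():
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    manifest = []
    for src in SOURCE_DIRS:
        for dirpath, _dirnames, filenames in os.walk(src):
            for name in filenames:
                if not name.endswith((".html", ".xht")):
                    continue  # only copy test files
                shutil.copy2(os.path.join(dirpath, name),
                             os.path.join(OUTPUT_DIR, name))
                manifest.append(name)
    # Importers read the manifest rather than walking the source tree,
    # so output paths stay stable even when sources move around.
    with open(os.path.join(OUTPUT_DIR, "MANIFEST"), "w") as f:
        f.write("\n".join(sorted(manifest)) + "\n")

if __name__ == "__main__":
    build()
```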
Received on Friday, 1 June 2012 13:59:31 UTC