Re: Knowing which tests are in the repository

On Friday, August 23, 2013 at 7:48 PM, Dirk Pranke wrote:
> Second, we have a separate manifest-ish file for marking a subset of tests as Slow, and they get a 30s timeout.

This seems like a really good solution, with just the right amount of annoyance to make developers seriously consider ways to rewrite the test so it's fast enough.
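
For illustration, here's a minimal sketch of how such a list might be consumed at run time. The file name, its format, and the 10s default timeout are made-up assumptions on my part; only the 30s slow timeout comes from Dirk's description:

    # Hypothetical sketch: assumes a plain-text file listing one slow test
    # path per line, with '#' starting a comment. The file name and the
    # 10s default are assumptions; the 30s slow timeout is from Dirk's setup.
    DEFAULT_TIMEOUT = 10  # seconds (assumed default)
    SLOW_TIMEOUT = 30     # seconds

    def load_slow_tests(path="SlowTests"):
        slow = set()
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()
                if line:
                    slow.add(line)
        return slow

    def timeout_for(test_path, slow_tests):
        return SLOW_TIMEOUT if test_path in slow_tests else DEFAULT_TIMEOUT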
> There is no build step, and no parsing of tests on the fly at test run time (except as part of the actual test execution, of course). It works well, and any delays caused by scanning for files or dealing with timeouts are a small (1-3%) part of the total test run.

That's a very valuable data point. The situation might be a little more complicated with some of the tests we have (e.g. reftests, which need to be parsed to discover their reference files).
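
To make that concrete: in the reftest convention, the reference files are declared via <link rel="match"> (or rel="mismatch") elements in the test's head, so knowing what to compare against what means opening and parsing each file. A rough sketch of that extraction step (error handling omitted):

    # Rough sketch: extract reftest references from a test file, assuming
    # the <link rel="match"|"mismatch" href="..."> convention.
    from html.parser import HTMLParser

    class RefLinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.refs = []  # (relation, href) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "link":
                a = dict(attrs)
                if a.get("rel") in ("match", "mismatch") and "href" in a:
                    self.refs.append((a["rel"], a["href"]))

    def reftest_references(path):
        parser = RefLinkParser()
        with open(path, encoding="utf-8") as f:
            parser.feed(f.read())
        return parser.refs  # empty => not a reftest, by this heuristic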

That said, the requirements for a manifest-building step might differ depending on the use case.

At W3C, I have two use cases with vastly different characteristics:

1) run the tests touched by a PR on 3-4 browser engines to aid with the PR review. Saving manifest files in that case is useless (see the sketch after this list).
2) nightly runs of the entire repository on > 100 browser/OS/device combinations. There, an initial parsing stage makes a lot of sense. Now, whether storing the result of that parsing stage in a manifest file is the best solution or whether keeping it in memory is better is still TBD (and might be context-sensitive).
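
As an illustration of use case 1, the set of tests to run can be derived straight from the PR's diff, with no stored manifest at all. A sketch (the base branch name and the suffix-based test detection are stand-ins for something smarter):

    # Hypothetical sketch: list test files touched by a PR, assuming the
    # PR branch is checked out and "master" is the base branch. The suffix
    # check stands in for real test-type detection.
    import subprocess

    def tests_touched(base="master"):
        out = subprocess.check_output(
            ["git", "diff", "--name-only", base + "...HEAD"])
        return [p for p in out.decode("utf-8").splitlines()
                if p.endswith((".html", ".htm", ".xht", ".svg"))]

    if __name__ == "__main__":
        for test in tests_touched():
            print(test)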

This is why I was suggesting that, although such a parsing stage is certainly useful, what you do with its output really depends on your use case, which is why it might be best kept implementation-specific.
> More importantly, it's very easy to understand and very transparent.

Agreed, this is important.
> I write all this not to argue that this is the way we must do things; I recognize that a lot of discussion has preceded me getting involved in this group. However, I do think we have a ways to go before the tests are being run as part of well-oiled machines by multiple browsers, and I'm trying to provide feedback based on what we've found to work well and be liked by developers.

Actually, there hasn't been a lot of conversation on this topic. Your feedback is timely. 
> The main point is, the more things we can make work by convention and reasonable defaults, the better, and if computers can extract more things at little performance cost, we should do so.

Absolutely. Everything that lightens the burden on test developers is worth it.

--tobie

Received on Friday, 23 August 2013 18:17:19 UTC