- From: Gérard Talbot <www-style@gtalbot.org>
- Date: Fri, 03 Mar 2017 16:36:32 -0500
- To: Florian Rivoal <florian@rivoal.net>
- Cc: W3C Public CSS Test suite mailing list <public-css-testsuite@w3.org>, W3C www-style mailing list <www-style@w3.org>
Le 2017-03-02 21:06, Florian Rivoal a écrit :

> We suffer from a lack of review on tests. The average age of a PR on
> the test repo is currently around 370 days, or 195 days even if we
> only count those not marked "awaiting-submitter-response". That's way
> too long.
>
> I think that's in part because nobody in particular feels responsible
> for this.

Reviewing tests that others have written and submitted

a) requires time, effort and care: you have to report your findings in
an email and/or in the Shepherd system, then propose solutions and
constructive modifications with tact and diplomacy;

b) is not necessarily fun in and of itself, although reviewing a lot of
tests has also been a learning experience, as I was able to see how
others (more experienced test authors) build and design tests;

c) requires skills and, eventually, experience of doing so; and

d) requires a good understanding of the involved specification(s) to
begin with. There are tests that have been submitted and reviewed and
which are incorrect; overall, this is rare, but it has happened.

> I do not think we can force anyone to review tests if they'd rather be
> doing something else, but we could work on identifying which specs lack
> test reviewers, and try to find volunteers.

Volunteers: if the review process is about finding and reporting

a) incorrect tests,
b) tests that cannot fail,
c) tests that do not test what they claim (or believe) to be testing,
d) imprecise tests and ill-designed tests, and
e) how to improve or rehabilitate the a)-to-d) types of tests,

then I think you can *not* expect volunteers to do this.

> I think each spec needs a minimum of 2 test reviewers. They can be the
> same as the editors, but do not have to be. I think it has to be at
> least 2, because if there's only one, there's nobody to review that
> person's tests, even though they are fairly likely to write some.
>
> So I suggest we introduce a new role, which I'll call Test Curator
> (feel free to bikeshed). The primary responsibility is not to write
> the entire test suite, but to ensure that reviews and merges of Pull
> Requests get done in a timely manner. Secondary responsibilities could
> include: being able to identify which areas of the test suite need more
> work (or to declare it complete)

Assessing the coverage of a specification by the tests in a test suite
is a very difficult task. It implies starting by enumerating all
testable statements for each section of a specification and then
creating sufficient tests for each testable statement worth testing in
each section. Complete, thorough test coverage of a specification is a
never-ending work in progress, I'd say.

> or checking if existing tests are
> still valid after normative changes in the spec.
>
> I suggest that on each spec:
>
> * we should list the "Test Curators", next to and separately from the
> Editors.
>
> * Ask Editors if they're willing to be a Test Curator as well. If they
> are, list them as such.
>
> * For every spec that has less than 2 test curators, call for
> volunteers
>
> * Have the chairs periodically (monthly?) check if more people need to
> be appointed (in addition or instead) for specs which still have a
> large queue of Pull Requests. This should be taken as seriously as
> finding new Editors for a spec when the previous/current ones have
> left or aren't keeping up.
>
> Note that this is separate from the (poorly named) OWNERS file used in
> the web-platform-test repo, which is merely a notification mechanism,
> with no implied responsibility.
>
> —Florian

Your test curator idea is, I believe, more about management and
superintendence of the overall test suite, and about communication with
test authors.

Gérard
Received on Friday, 3 March 2017 21:37:09 UTC