- From: <ishida@w3.org>
- Date: Sat, 9 Apr 2016 13:58:55 +0100
- To: fantasai <fantasai.lists@inkedblade.net>, Florian Rivoal <florian@rivoal.net>, Geoffrey Sneddon <me@gsnedders.com>
- Cc: www-style list <www-style@w3.org>
On 04/04/2016 22:19, fantasai wrote:
>> So I'm in favor of as little meta-data as possible, but not of
>> no meta-data at all. As a consumer of tests, the assertion and
>> the link to the related specs are very important.
>
> I agree with Florian's comments overall, and just wanted to point
> out that, as a test reviewer, the spec links and assertion are
> pretty critical to figuring out one of the main three failure modes
> of a test.
>
> They are:
> 1. Does the test pass when it's supposed to pass?
> 2. Does the test fail when it's supposed to fail?
> 3. Does the test actually test the condition the test writer is
>    trying to check?

Another big vote of support for the idea that *some* metadata is
important. At the very least, a statement of *precisely* what the
test writer was expecting to test is essential for people who want to
work with the test afterwards, check for coverage, etc.

I can't count the number of times that it has taken me a good while
poring over the code to figure out whether the test I was looking at
fits what I was looking for, or why it failed. Sometimes I discover
that the tester only had a vague idea of what they were testing. If
they had had an assertion to work to, (a) I'd have understood that
much sooner, and (b) they might have made a better-focused test.

If you want to improve the quality of tests, I think you need to make
it easier for the (much smaller number of) people who work with and
evaluate the tests being submitted, rather than increase the number
of tests while making it harder to review them.

What I would really like to see, though, as someone who tends to
create a significant number of tests at a time but has almost no time
to dedicate to that activity, is less insistence on minor formatting
requirements, and especially less slavish obedience to the automated
checkers. I recently had to rework a bunch of tests and resubmit the
PR because of things like spaces at the end of a line *inside a
comment*. I did it, but I seriously considered just abandoning the
effort, since it meant using up the small amount of time I try to
reserve for family life. I couldn't help wondering why I had to do
busy work that didn't affect the test. It's fine to have automated
tools looking for errors, but if it's not clear that those errors are
really going to break the test, let's not reject the tests outright.

I actually develop tests for use elsewhere, but in a way that makes
it possible to reuse them for the web-platform or CSS repos. However,
I often think that little allowance is made for people who might want
to create such efficiencies. Sometimes the rules about metadata seem
too myopic, and sometimes people make sweeping changes to unimportant
aspects of large numbers of files, which means that it's no longer
possible to quickly ascertain which files have been changed in
unimportant ways and which have had substantive changes made, changes
that I need to reintegrate into my original set of tests.

By the way, while I'm making suggestions, one other thing I'd like to
change is the dumping of huge numbers of tests into a single
top-level directory without further structure. For example, the CSS
Writing Modes spec has around 1000 tests that all sit in one
directory. It would be much easier to find specific tests, determine
coverage, etc. if the tests were in a directory structure that more
closely matched the structure of the document (at least to some
degree).

Hope that helps,
ri
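P.S. In case it's useful for anyone following along, the kind of
metadata being discussed amounts to just a few lines in the test
header. A minimal sketch (the spec link and assertion text here are
illustrative, not taken from a real test):

  <!DOCTYPE html>
  <meta charset="utf-8">
  <title>CSS Writing Modes Test: vertical-rl block flow</title>
  <link rel="author" title="Richard Ishida" href="mailto:ishida@w3.org">
  <link rel="help" href="https://www.w3.org/TR/css-writing-modes-3/#block-flow">
  <meta name="assert" content="When writing-mode is vertical-rl,
    blocks are stacked in columns ordered from right to left.">

The rel="help" link tells a reviewer exactly which part of the spec
is under test, and the assert metadata records what the test writer
was actually trying to check.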
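P.P.S. To make the directory suggestion concrete, I mean something
along these lines (the names here are only illustrative, loosely
following the section headings of the Writing Modes spec):

  css-writing-modes-3/
    inline-direction/
    block-flow-direction/
    text-orientation/
    abstract-dimensions/
    orthogonal-flows/
    ...

so that one can see at a glance which parts of the spec are covered
and where a given test belongs.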
Received on Saturday, 9 April 2016 12:59:05 UTC