Re: Some proposals on Test Review Checklist and Test Style Guidelines

Hi Gérard,

I just mentioned this to you in a private thread on a similar topic, but
I’m repeating it here for the benefit of the list.

All of the test documentation can be modified by
anyone on GitHub:

If you see something there that needs correction or enhancement, it is
highly preferred that you fork the repo, make the edits, and send a pull
request (PR). When the PR is submitted, it's very easy to add an
explanation of the changes so the reviewer(s) have a clear understanding.

One of the many reasons why this is preferred is that your suggested edits
can be reviewed inline next to the original content. Additionally, those
who are interested in doc changes can subscribe and receive notifications
from the PR and its subsequent review activity. This saves everyone a lot
of typing and reading emails that are detached from the actual content.
Here’s an example of how it looks and how nicely the review comments can
be made inline.

More info on updating the docs is here:

And specifics on how to fork and submit a pull request (although for
these docs, use the testtwf-website repo rather than the test repo):
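For reference, the branch-and-commit part of that workflow can be sketched as shell commands. The real first step (forking testtwf-website on GitHub and cloning your fork) needs the network, so a throwaway local repo stands in below; the file name, branch name, and commit messages are hypothetical:

```shell
set -e
# Stand-in for "git clone <your fork of testtwf-website>":
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.org"
git config user.name "Your Name"
printf 'Test Review Checklist\n' > review-checklist.md
git add review-checklist.md
git commit -qm "Initial docs"
# Create a topic branch for the proposed edit:
git checkout -qb clarify-reftest-wording
printf 'Reftests: tests with associated reference files\n' >> review-checklist.md
git commit -qam "Clarify what Reftests means in the checklist"
# Against a real fork you would now run
#   git push origin clarify-reftest-wording
# then open the pull request on GitHub and @mention reviewers there.
git log -1 --pretty=%s
```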

Feel free to make these suggested changes in a single pull request or in
separate ones, as you see fit. One tip: if you want someone specifically
to review your change, you can ask them when you submit the PR by using
their GitHub handle. For example:

“@rhauck or @plinss - can you please review this?”  That way, if that
person is not subscribed, GitHub will send them a notification so you
needn’t send a separate email. Whether or not they're subscribed, this is
very common practice when soliciting a review from someone specific.

Let me know if you have any other questions or comments about this new
process.

Thanks for all your help!


On 4/15/14, 3:56 AM, "Gérard Talbot" <> wrote:

>Test Review Checklist
>I believe this page may be a source of confusion.
>"All tests" is supposed to mean a) non-self-describing tests and b)
>self-describing tests; each of these categories (a and b) could be of
>type manual (not a reftest) or automatable (a reftest).
>When you refer to "Reftests", I believe you mean tests that have one (or
>several) associated reference file(s). But this may not be what people
>would think...
>Here's my proposal:
>All tests
>The test passes when it's supposed to pass.
>The test fails when it's supposed to fail.
>The test is testing what it thinks it's testing.
>The spec backs up the expected behavior in the test.
>The test is automated as either reftest or a script test unless there's
>a very good reason why the test must be manual.
>The test does not use external resources.
>The test does not use proprietary features (vendor-prefixed or
>otherwise).
>The title is descriptive but not too wordy.
>The test is as cross-platform as reasonably possible, working across
>different devices, screen resolutions, paper sizes, etc.
>Self-describing tests
>The self-describing statement is clear, short and self-explanatory. Your
>mother/husband/roommate/brother/bus driver should be able to say whether
>the test passed or failed within a few seconds, and not need to spend
>several minutes thinking or asking questions.
>Reference file only
>The reference file is accurate and will render pixel-perfect identically
>to the test on all platforms.
>The reference file uses a different technique that won't fail in the
>same way as the test.
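>As a minimal sketch of that principle (file names and styles below are
>hypothetical): the test paints a square via the property under test,
>while the reference paints the same square with a different technique,
>so a failure of the tested property cannot also break the reference:

```html
<!-- square-test.html (hypothetical): paints green via border-bottom-width,
     the property under test. -->
<!DOCTYPE html>
<title>CSS Test: border-bottom-width paints a green square</title>
<link rel="match" href="square-ref.html">
<p>Test passes if there is a filled green square and <strong>no red</strong>.</p>
<div style="width: 100px; border-bottom: 100px solid green;"></div>

<!-- square-ref.html (hypothetical): paints the same square with a plain
     background, a technique that cannot fail in the same way. -->
<!DOCTYPE html>
<title>Reference: green square</title>
<p>Test passes if there is a filled green square and <strong>no red</strong>.</p>
<div style="width: 100px; height: 100px; background: green;"></div>
```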
>Script Tests Only
>no change; same as now
>In depth Checklist
>no change; same as now
>Note that I am also proposing some changes.
>"The self-describing statement is accurate, precise, simple, and
>self-explanatory."
>is now
>"The self-describing statement is clear, short and self-explanatory."
>Also, I propose to remove
>"If there are limitations (e.g. the test will only work on 96dpi
>devices, or screens wider than 200 pixels), then these are documented in
>the instructions."
>If a test is supposed to work only on 96dpi devices, or in paper media
>only, etc., then test creators only need to use this list accordingly.
>This example
>coming from
>is not ideal (and that's my fault!) because green is used without red in
>case of a failure. I would need to change those
>border-bottom-applies-to-* tests so that they use red, as the
>border-bottom-width-applies-to-* tests do. At the same time, they would
>reuse the same reference files.
>Test Style Guidelines
>"This line should ..."
>I am for replacing "line" with "sentence" or "text", and keeping "line"
>for linear test situations.
>I am also strongly for systematic usage of the "Test passes if ..."
>introductory words in every example of a self-describing sentence.
>I am for replacing
>"Test passes if there is a green square and no red."
>with
>"Test passes if there is a filled green square and *no red*."
>is already referenced by 149 tests and there is no reason why it would
>not or could not be referenced by thousands of tests.
>The words "on this page"
>in self-describing sentences can be safely removed. Same thing with:
>"you can see"
>"You should see"
>"you can view"
>"in this page"
>"on this page"
>"below this line"
>"after this line"
>"below this sentence"
>"after this sentence"
>"below this paragraph"
>"under this paragraph"
>"in the next paragraph"
>"after this"
>"which follows"
>If self-describing tests all start with the recommended "Test passes if
>..." and if testers are assumed not to be blind, then all these
>expressions can safely be removed.
>d) "Filler text" should be preferred for page background; "Text sample"
>should be preferred when the text is the object of the test.
>Web authors' contributions to CSS 2.1 test suite
>CSS 2.1 Test suite RC6, March 23rd 2011

Received on Monday, 21 April 2014 15:52:34 UTC