Better Test Case Coding for Reviewers

From: fantasai <fantasai.lists@inkedblade.net>
Date: Sun, 05 Jul 2015 12:31:15 -0400
Message-ID: <55995BD3.1070907@inkedblade.net>
To: "public-css-testsuite@w3.org" <public-css-testsuite@w3.org>, Gérard Talbot <css21testsuite@gtalbot.org>

[CCing m.d.platform, since it might be helpful for layout tests there, too.]

I've been reviewing some tests lately, and there are three things that
would help a lot in making reviews correct and efficient.

#1: Good indentation.

   The contents of each block should be indented. It's much harder to
   read the style sheet if this is not true. I've seen quite a few tests
   that have this problem. :(
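   As a quick illustration (the selector and declarations here are
   invented, not from any particular test), compare a flush-left rule
   with its indented equivalent:

```css
/* Hard to scan: block contents flush left */
.container {
background-color: green;
background-position: center;
}

/* Easy to scan: block contents indented */
.container {
  background-color: green;
  background-position: center;
}
```

   The two rules are identical; only the second one lets a reviewer see
   at a glance where each block begins and ends.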

#2: Separating framework from variation.

   Most tests have code that sets up a framework to reveal the results
   of a test, and then code that sets up the specific situation being
   tested. The framework does not vary from test to test within a set,
   but the test declarations do.

   For example, in a test for background positioning, there will be
   rules to set up a box, e.g. width, height, border, padding, margins,
   color, background-repeat, etc. These are the framework declarations.
   They do not vary by test within the set. Then there will be rules
   that set up the test, e.g. background-position, background-origin,
   background-size. These vary by test.

   If there are gratuitous differences in formatting, ordering,
   indentation, of the framework declarations, or if the test mixes
   the framework declarations with the test declarations, then I have
   to review the entire framework for every test in the series, because
   I'm not sure what's changed.

   However, if the framework declarations are kept constant, and the
   test declarations -- the part that varies per test in the series --
   are pulled out and isolated into their own part of the style sheet,
   then I can quickly review an entire series, and I am less likely to
   make mistakes in the review.

   For example, for test file 1

     /* Framework */
     .container {
       border-style: solid;
       border-width: 3px 5px 7px 2px;
       padding: 11px 17px 19px 13px;
       margin: 1em;
       background-image: url(support/swatch-green.png);
       background-repeat: no-repeat;
     }

     /* Test */
     .container {
       background-position: center;
     }

   and test file 2

     /* Framework */
     .container {
       border-style: solid;
       border-width: 3px 5px 7px 2px;
       padding: 11px 17px 19px 13px;
       margin: 1em;
       background-image: url(support/swatch-green.png);
       background-repeat: no-repeat;
     }

     /* Test */
     .container {
       background-position: left;
     }

   I can quickly see, at a glance, that the framework has not changed.
   And that the only difference between the two tests is that 'center'
   was replaced with 'left' in the background-position rule.

#3: Declaration of intent.

   If the test writer tells me what they're trying to test, specifically,
   in this test, rather than having me guess from the source code, I'm
   much more likely to detect when they're failing at their job, either
   by flat-out not triggering the right situation at all, or by not
   checking some failure modes. (I see both of these happen often;
   it is not a theoretical problem.)

   The CSSWG tests have a <meta name="assert"> field for this purpose,
   but, really, a genuinely useful comment in any format would help.
   We comment our functions to say what they're supposed to do, but
   for some reason we don't think it's necessary to do this for tests.
   Non-trivial tests should have a comment stating their raison d'être.
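   For a CSSWG-style reftest, a minimal head carrying that statement of
   intent might look like this (the title, spec anchor, and reference
   filename below are invented for illustration):

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>CSS Backgrounds Test: background-position center</title>
<link rel="help" href="https://www.w3.org/TR/css3-background/#the-background-position">
<link rel="match" href="reference/background-position-center-ref.html">
<!-- The assert tells the reviewer exactly what pass/fail means here -->
<meta name="assert" content="'background-position: center' centers the
  background image within the box established by background-origin.">
```

   The assert is the one line a reviewer can check the test against
   without reverse-engineering the source.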

~fantasai
Received on Sunday, 5 July 2015 16:31:48 UTC