
Re: Towards a better testsuite

From: fantasai <fantasai.lists@inkedblade.net>
Date: Mon, 4 Apr 2016 17:19:20 -0400
To: Florian Rivoal <florian@rivoal.net>, Geoffrey Sneddon <me@gsnedders.com>
Cc: www-style list <www-style@w3.org>
Message-ID: <5702DA58.1020703@inkedblade.net>
On 03/25/2016 12:38 AM, Florian Rivoal wrote:
>
>> On Mar 25, 2016, at 02:00, Geoffrey Sneddon <me@gsnedders.com> wrote:
>>
>> That said, I think it's worthwhile to reiterate that requiring *any*
>> metadata causes friction. Tests written by browser vendors are rarely
>> just a file or two that is quick to add metadata to. I know in general
>> people seem interested in using the same infrastructure to run both
>> web-platform-tests and csswg-test, which essentially requires that the
>> metadata needed to run the tests be identical across the two.
>
> I think you need to be very careful. Yes, removing all metadata
> lowers friction for test submission, but it increases friction
> when you're on the receiving end.
>
> Presumably, when someone adds a test to a browser's private repo,
> even if there's no explicit metadata at all in the test, there
> is implicit knowledge about what this is about. Maybe in a bug
> tracker somewhere. Maybe based on the branch it's being added to.
> Maybe just because the person who added it is or knows the person
> who's supposed to make it pass.
>
> But this contextual information isn't passed along when sharing the
> test with other vendors. If another vendor regularly syncs with a
> repo containing tests that have no metadata at all, they'll wake up
> to a test run with a few hundred failing tests, and no indication
> whatsoever of what those tests are about. Depending on the test,
> finding out can be far from obvious. This is a great way to make
> sure that failing tests are ignored, or that we stop syncing.
>
> Not sure if that's better or worse, but if that lot includes
> incorrect tests that pass even though they really shouldn't,
> you'll wind up integrating them into your regression test suite
> (hey, these tests used to pass, we need to keep them green)
> without even being aware of it.
>
> So I'm in favor of as little metadata as possible, but not of
> no metadata at all. For a consumer of tests, the assertion and
> the links to the related specs are very important.

I agree with Florian's comments overall, and just wanted to point
out that, as a test reviewer, I find the spec links and assertion
pretty critical to figuring out one of the three main failure modes
of a test.

They are
   1. Does the test pass when it's supposed to pass?
   2. Does the test fail when it's supposed to fail?
   3. Does the test actually test the condition the test writer is
      trying to check?

I've seen test writers fail #3 fairly often.

Fwiw, I don't think it's particularly important to distinguish
between
   a) The <title>
   b) The <meta> assert
   c) HTML or CSS comments from the author on what they're trying
      to test in this file.
and I'm happy to fold all these into <title>, if that would make
things easier.
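
For concreteness, here's a sketch of what that minimal metadata looks
like in a csswg-test-style reftest. The spec URL, reference filename,
and assertion text are illustrative, but the conventions (<link
rel="help">, <link rel="match">, <meta name="assert">) are the ones
the test suite already uses:

```html
<!DOCTYPE html>
<title>CSS Writing Modes Test: vertical-rl block flow direction</title>
<!-- Spec link: tells a reviewer (and a vendor syncing the test)
     which section of which spec this test is checking. -->
<link rel="help" href="https://www.w3.org/TR/css-writing-modes-3/#block-flow">
<!-- Reference file the reftest harness compares renderings against. -->
<link rel="match" href="block-flow-vrl-ref.html">
<!-- The assertion: the specific behavior that must hold to pass. -->
<meta name="assert" content="Blocks stack from right to left when
  writing-mode is vertical-rl.">
```

When a few hundred such tests start failing after a sync, the help
link and assertion are what let you triage them without spelunking
through someone else's bug tracker.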

But just as a non-trivial function should describe its purpose in
life (so we can figure out what it's trying to do, whether it's
doing it correctly, and how to fix errors in it or rewrite it to
use some other implementation if necessary), so too should a
conformance test. Unlike the browser vendors who dump tests into
their automated system, we're not building a regression test
library where the main point is to not change behavior. We're
building a conformance test library, which is intended to be
reused and updated and shared and maintained.

~fantasai
Received on Monday, 4 April 2016 21:19:52 UTC
