Re: "priority" of tests

On Wed, May 10, 2017 at 9:50 PM, James Graham <> wrote:
> On 10/05/17 20:44, Philip Jägenstedt wrote:
>> For some kind of metadata, would that be at the test level? At least I tend
>> to write one file to test (for example) everything about a constructor, and
>> that would mix the serious with the trivial in the same file. But we have
>> no mechanism for annotating the individual tests.
> So that's technically untrue. But nevertheless I don't think a metadata based system will work. Historically we have never managed to get developers as a group to add metadata at all — even getting something as basic as useful commit messages is hard — and even where individuals have been motivated to add it, it has always bitrotted rather quickly.
> I believe a plan based around getting people to add vague value judgements about the importance of tests would be doomed to failure. Even if we could get people to add this data at all, it would mostly be wrong when added and then later be even more wrong (because an "unimportant test" can be "important" when it turns out that specific condition is triggered in Facebook).
> I wish this wasn't true, but I think the reality is that there just isn't a simple solution to figuring out which tests are important. Often it's possible to tell that some class of failures isn't urgent (because e.g. as Philip says you recognise that a specific failure message relates to a known error in your WebIDL implementation), but otherwise you need someone with expertise to make a judgement call.

Even without any metadata, I think there are various points of
interest: obviously if something crashes it is likely a higher
priority than any other failure, but similarly if something doesn't
throw an exception at all, that's probably higher priority than
throwing the wrong type of exception.
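As a rough illustration, that kind of heuristic could be sketched as
below. This is hypothetical, not part of any real wpt tooling; the
status names loosely follow wpt's result statuses (CRASH, TIMEOUT,
FAIL), and the message-based rules are invented for the example:

```python
def priority(status, message=""):
    """Return a rough priority for a test result: lower is more urgent.

    Hypothetical heuristic: crashes first, then timeouts, then failures,
    with a missing exception ranked above throwing the wrong type.
    """
    if status == "CRASH":
        return 0  # a crash is likely more urgent than any other failure
    if status == "TIMEOUT":
        return 1
    if status == "FAIL":
        if "did not throw" in message:
            return 2  # no exception at all
        if "expected TypeError" in message:
            return 3  # wrong exception type: probably less urgent
        return 2
    return 4  # PASS or unknown statuses sort last

results = [
    ("FAIL", "assert_throws: expected TypeError, got RangeError"),
    ("CRASH", ""),
    ("FAIL", "promise did not throw"),
]
results.sort(key=lambda r: priority(*r))
```

Sorting with this key would put the crash first, the missing exception
next, and the wrong-type failure last.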

The other thing that could be used is the dashboard tracking browser
bugs for failing tests: you could then see how other vendors have
prioritised a failing test if they also fail it (or, if some priority
data is kept around, if they have previously failed it).
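To make that concrete, here is a minimal sketch of what such a lookup
might look like, assuming the dashboard stored a per-test mapping from
vendor to filed bug and priority. The data shape, test path, bug
identifiers, and function are all invented for illustration:

```python
# Hypothetical per-test bug-tracking data; nothing here reflects a
# real dashboard schema or real bug reports.
tracked_bugs = {
    "dom/historical.html": {
        "firefox": {"bug": "bugzilla#1234", "priority": "P3"},
        "chrome": {"bug": "crbug#5678", "priority": "P1"},
    },
}

def peer_priorities(test, vendor):
    """Return the priorities other vendors assigned to this failing test."""
    bugs = tracked_bugs.get(test, {})
    return {v: info["priority"] for v, info in bugs.items() if v != vendor}
```

A Firefox triager looking at `dom/historical.html` would then see that
Chrome filed its bug as P1, which is a useful signal even without any
metadata in the test itself.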

I'm not opposed to adding metadata, but I think we do need buy-in from
all vendors to actually be willing to add it to future tests: and
that, really, has always been the sticking point; people have always
wanted to use certain bits of metadata, but have never wanted to add
it to the new tests they write.


Received on Thursday, 18 May 2017 16:54:17 UTC