Re: dependencies in tests

> On 18 Jun 2015, at 22:03, Gérard Talbot <css21testsuite@gtalbot.org> wrote:
> 
> A is a boolean condition of some sort; if A does not exist, then we can not check for B.
> 
> Not having A makes the test result undefined, unknown or makes the test not applicable.

Yes, exactly. Which is my question: how do we, in prose and in machine-readable metadata, indicate that a test is not applicable under certain conditions (in this case, when vertical text is not supported)?

> Eg. Prince version 10.2r1 is a conversion HTML-to-PDF-with-CSS web-aware application. So, tests with flags "animated" and "interact" should be avoided and do not apply to such UA and the related test results should be ignored.

Right. But there is no flag for "vertical", so how do I mark up a test as irrelevant?
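To make this concrete, here is a rough sketch in Python (the flag names "animated" and "interact" are the real test suite flags mentioned above, but the test names and everything else are made up for illustration) of how flag-based filtering works today, and why a test that depends on vertical text slips through when there is no flag to declare that dependency:

    # Hypothetical sketch: each test declares its flags in metadata,
    # represented here as a plain dict of sets.
    tests = {
        "transition-timing-001.html": {"animated"},
        "focus-ring-002.html": {"interact"},
        "writing-mode-vrl-003.html": set(),  # needs vertical text, but there
                                             # is no flag to say so
    }

    # Flags a static HTML-to-PDF UA (such as Prince) cannot satisfy.
    unsupported_flags = {"animated", "interact"}

    # Tests the UA is expected to report results for; the vertical-text test
    # is still included because nothing in its metadata marks the dependency.
    applicable = [name for name, flags in tests.items()
                  if not flags & unsupported_flags]
    print(applicable)  # ['writing-mode-vrl-003.html']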

> Yes, some tests check interaction of specifications. In general, there is not a lot of tests testing interaction of specifications.
> 
>> but they
>> get pulled into both, and that means that are effectively discouraging
>> writing cross-spec tests, since they create (or appear to create)
>> dependencies that weren't required by the spec itself.
>> - Florian
> 
> I've read your last sentence and did not quite understand it: why are you saying "that means that are effectively discouraging writing cross-spec tests"?

Just to be clear, I'm defining "cross-spec test" as a test that checks the interaction between two specs when neither is a prerequisite for the other. They interact when both are implemented, but you can legitimately implement one without implementing the other.

I agree that cross-spec tests are good and that we don't have enough of them, which is why I am a bit worried about discouraging people from writing more.

The reason I think we may be discouraging these tests is that spec authors and implementation vendors are common authors of tests, and even with the best intentions, they have an interest in moving things forward (i.e. along TR), be it emotional or otherwise.

When you write a test that links (in the "help" meta) to two specs, it gets listed in both specs' test suites (as generated by Shepherd, the test harness, or the in-spec annotate.js).
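Roughly, this is what happens (a sketch only; the spec short names are real, but the test name and the crude short-name extraction are made up for illustration):

    # Sketch of how one test with two "help" links ends up in two suites.
    from collections import defaultdict

    help_links = {
        "orthogonal-flow-001.html": [
            "http://www.w3.org/TR/css-writing-modes-3/#orthogonal-flows",
            "http://www.w3.org/TR/css-flexbox-1/#flex-containers",
        ],
    }

    suites = defaultdict(list)
    for test, links in help_links.items():
        for link in links:
            spec = link.split("/TR/")[1].split("/")[0]  # crude short-name extraction
            suites[spec].append(test)

    # The same test now appears in both generated suites:
    # {'css-writing-modes-3': [...], 'css-flexbox-1': [...]}
    print(dict(suites))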

When one spec is an old one whose support you can assume and the other isn't, this is fine. When both specs are works in progress, it is still a useful test (maybe even more so), but introducing a test that will fail because of a spec you are not currently interested in into the test suite of the one you are trying to move forward is an annoyance. When submitting an implementation report, it should be possible to explain away these failures if you're only implementing one of the two specs, but both making that claim and checking that it is justified is extra work.

Creating extra work for yourself over a specification you are not interested in is a disincentive to writing the test. Even if it is not a strong one, given that we don't have enough of these tests, that is bad.

Maybe a way out of this would be to add a feature to testharness/shepherd/annotate.js: when presenting the list of tests for a particular spec, they would also show check boxes for every other spec cross-referenced by the tests, and if you don't claim to implement some of those specs, you could uncheck them and get a test suite with the irrelevant tests removed.
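The filtering itself would be simple; something along these lines (a hypothetical sketch, not an existing feature of any of those tools; the function, data structures, and names are made up):

    def filtered_suite(tests_with_refs, target_spec, implemented_specs):
        """tests_with_refs maps test name -> set of spec short names it references."""
        suite = []
        for test, refs in tests_with_refs.items():
            if target_spec not in refs:
                continue                    # not part of this spec's suite at all
            other = refs - {target_spec}
            if other <= implemented_specs:  # every other referenced spec is claimed
                suite.append(test)
        return suite

    tests_with_refs = {
        "orthogonal-flow-001.html": {"css-writing-modes-3", "css-flexbox-1"},
        "writing-mode-vrl-003.html": {"css-writing-modes-3"},
    }

    # A vendor moving writing modes forward but not implementing flexbox
    # would "uncheck" css-flexbox-1 and get only the relevant tests:
    print(filtered_suite(tests_with_refs,
                         "css-writing-modes-3", {"css-writing-modes-3"}))
    # -> ['writing-mode-vrl-003.html']  (the cross-spec test is excluded
    #     rather than reported as a failure)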

Until we get this or something similar, I guess the right thing to do is to make sure to reference both specs from the test so that the feature described above can work when we introduce it, and maybe add a prose note in the test to inform human reviewers and testers.

 - Florian

Received on Saturday, 20 June 2015 12:29:31 UTC