Re: dependencies in tests

On 2015-06-20 08:29, Florian Rivoal wrote:
>> On 18 Jun 2015, at 22:03, Gérard Talbot <css21testsuite@gtalbot.org> 
>> wrote:
>> 
>> A is a boolean condition of some sort; if A does not exist, then we 
>> cannot check for B.
>> 
>> Not having A makes the test result undefined or unknown, or makes the 
>> test not applicable.
> 
> Yes, exactly. Which is my question, how do we, in prose and in machine
> readable metadata, indicate that a test is not applicable under
> certain conditions (in this case, the condition being when vertical
> text is not supported).
> 
>> E.g. Prince version 10.2r1 is a web-aware HTML-to-PDF conversion 
>> application with CSS support. So, tests with the "animated" and 
>> "interact" flags do not apply to such a UA; they should be avoided 
>> and the related test results should be ignored.
> 
> Right. But there is no flag for "vertical", so how do I mark up a test
> as irrelevant?

If you know that a particular UA does not support vertical writing 
modes, then don't take the writing-modes test suite or, if you do take 
it, ignore such test results.
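
(As a side note, such flags are declared in the head of a test; the 
line below is only an illustration combining the two flags I 
mentioned:

  <meta name="flags" content="animated interact">

and, indeed, there is currently no such flag token for vertical text.)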

I know that Prince version 10.2r1 does not support 'writing-mode: 
vertical-lr'; so I personally just stopped taking those tests and only 
take the 'writing-mode: vertical-rl' tests with Prince v.10.2r1.

>> Yes, some tests check the interaction of specifications. In general, 
>> there are not a lot of tests testing the interaction of specifications.
>> 
>>> but they
>>> get pulled into both, and that means we are effectively discouraging
>>> writing cross-spec tests, since they create (or appear to create)
>>> dependencies that weren't required by the spec itself.
>>> - Florian
>> 
>> I've read your last sentence and did not quite understand it: why are 
>> you saying "that means we are effectively discouraging writing 
>> cross-spec tests"?
> 
> Just to be clear, I'm defining "cross spec test" as a test that checks
> the interaction between 2 specs when neither spec is a prerequisite
> for the other. They interact when they are both implemented, but you
> can legitimately implement one without implementing the other.
> 
> I agree that cross spec tests are good, and that we don't have
> enough, which is why I am a bit worried about discouraging getting
> more.
> 
> The reason I think we may be discouraging these tests is that spec
> authors and implementation vendors are common authors of tests, and
> even with the best intentions, they have an interest in moving things
> forward (i.e. along TR), be it emotional, or

I agree with you.

I think that testers from rendering engine (or browser) vendors 
occasionally create tests that they - subconsciously - want their 
rendering engine (or browser) to pass, and so they create easier (or 
just basic) tests for that purpose, or they create tests that are not 
really and truly testing what they believe - in good faith - their 
tests claim to be testing. Or they may even not submit tests that they 
know their rendering engine (or browser) fails.

> When you write a test that links (in the "help" meta) to 2 specs, it
> gets listed in both specs' test suites (as generated either by
> shepherd, or test harness, or the in-spec annotate.js).
> 
> When one spec is an old one that you can assume support for and the
> other isn't, this is fine. When both specs are works in progress, it
> is still a useful test (maybe even more so), but introducing a test
> that will fail because of a spec you are not currently interested in
> into the test suite of one you're trying to move forward is an
> annoyance.

An annoyance, indeed.

> When submitting an implementation report, these failures
> should be possible to explain away if you're only implementing one of
> the two specs, but both making that claim and checking that it is
> justified is extra work.
> 
> Creating extra work for yourself regarding a specification you are not
> interested in is an incentive against writing the test. Even if it is
> not a strong one, since we don't have enough of these tests, this is
> bad.
> 
> Maybe a way out of this would be to add a feature to
> testharness/shepherd/annotate.js: when giving the lists of tests for a
> particular spec, they should also present you with check boxes for
> every other spec cross-referenced by the tests, and if you don't claim
> to implement these specs, then you can uncheck these and get a test
> suite with the irrelevant tests removed.

Is this worth it? I mean, implementing this would mean significant 
work... and we don't have many tests that are true "cross spec tests"...

> Until we get this or something similar, I guess the right thing to do
> is to make sure to reference both specs from the test so that the
> feature described above can work when we introduce it, and maybe put a
> note in prose in the test to inform human reviewers / testers.

Well, isn't it sufficient to have 2 distinct <link rel="help"> elements 
pointing to 2 different specs?
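
Something like this, where the two spec URLs are only an example I am 
making up for illustration:

  <link rel="help" href="http://www.w3.org/TR/css-writing-modes-3/">
  <link rel="help" href="http://www.w3.org/TR/css-flexbox-1/">

That alone is enough for shepherd, the test harness or annotate.js to 
list the test in both test suites, which is the situation you describe.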

I am inclined to think that you may be going too far here. I do not see 
a big problem with failing a true "cross spec test". Furthermore, if 
such a test has a "may" flag, is there really a problem?

I think there are much bigger problems with tests and test suites right 
now...

Gérard
-- 
Test Format Guidelines
http://testthewebforward.org/docs/test-format-guidelines.html

Test Style Guidelines
http://testthewebforward.org/docs/test-style-guidelines.html

Test Templates
http://testthewebforward.org/docs/test-templates.html

CSS Naming Guidelines
http://testthewebforward.org/docs/css-naming.html

Test Review Checklist
http://testthewebforward.org/docs/review-checklist.html

CSS Metadata
http://testthewebforward.org/docs/css-metadata.html

Received on Saturday, 20 June 2015 17:38:40 UTC