Re: dependencies in tests

On 06/16/2015 04:20 PM, Florian Rivoal wrote:
> Hi,
>
> I am wondering what to do when testing a feature A that depends on a feature B if B isn't available (and is something from another spec). Should I make that a pass condition? a fail condition? Something else?
>
> Here's one concrete example:
>
> For 'cursor: text', "User agents may automatically display a horizontal I-beam/cursor (e.g. same as the vertical-text keyword) for vertical text"
>
> (This is a MAY, so the "may" meta flag is needed, but that's orthogonal to the question.)
>
> A simple test would involve using 'writing-mode' to make a piece of vertical text, applying 'cursor: text' to it, and checking what the cursor looks like.
>
> But what if the browser doesn't support writing modes?
>
> Should the text in the test say something like "Test passes if ..., fails if ..., and if there is no vertical text, skip this test"?
>
> I could not find guidance for this on the Test the Web Forward site.

We don't have anything set up for this yet. It might be a good idea to create a new
<link> or <meta> element that encodes the dependency: Shepherd could then mark as
invalid any passing results for a test whose dependency test didn't pass.
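As a rough sketch, such markup might look like the following in a test file. The
rel="dependency" relation is invented for illustration and isn't recognized by any
tooling today; the rel="help" link and the "flags" meta follow the existing CSS
test suite conventions:

```html
<!DOCTYPE html>
<html>
 <head>
  <title>CSS Test: 'cursor: text' over vertical text</title>
  <!-- Existing test-suite metadata conventions -->
  <link rel="help" href="https://www.w3.org/TR/css-ui-3/#cursor">
  <meta name="flags" content="may">
  <!-- Hypothetical: declare that this test depends on another test
       passing; not recognized by Shepherd or any other tool today -->
  <link rel="dependency" href="../css-writing-modes-3/writing-mode-vertical-rl-001.htm">
 </head>
 <body>
  <p>Test passes if the cursor over the text below is a horizontal I-beam.</p>
  <p style="writing-mode: vertical-rl; cursor: text">Vertical text sample</p>
 </body>
</html>
```

Results for this test could then be discarded automatically whenever the
referenced writing-modes test fails in the same browser.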

There are a number of features we rely on heavily that it wouldn't make sense to
mark up, though: absolute positioning, fixed widths and heights, colors and
background colors, basic line-breaking (breaking at spaces, not between ASCII
letters), etc.

I'd recommend talking to Peter about this, since he's maintaining the test systems.

~fantasai

Received on Sunday, 5 July 2015 15:35:05 UTC