Re: Knowing which tests are in the repository

On 22/08/13 17:45, Dirk Pranke wrote:
> I mostly like this ... comments inline.
>
> On Thu, Aug 22, 2013 at 9:31 AM, James Graham <james@hoppipolla.co.uk
> <mailto:james@hoppipolla.co.uk>> wrote:
>
>     A modified proposal:
>
>     By default apply the following rules, in the order given:
>
>     * Any file with a name starting with a . or equal to
>     "override.manifest" is a helper file
>
>
> Are there helper files other than manifests that we should be worrying
> about? I'm thinking of things like .htaccess, .gitignore, etc. I would
> probably say "is not a test" (or possibly "can be ignored") rather than
> "is a helper file".

Sure, I only reused "helper file" for this case because I couldn't think 
of a better term.

>     * Any file with -manual in the name before the extension is a manual
>     test.
>
>     * Any html, xhtml or svg file that links to testharness.js is a
>     testharness test
>
>     * Any html, xhtml or svg file that has a file with the same name but
>     the suffix -ref before the extension is a reftest file and the
>     corresponding -ref file is a helper file.
>
>     * Any html, xhtml or svg file that contains a link rel=match or link
>     rel=mismatch is a reftest file.
>
>
> Strictly speaking, one could say that -manual is unneeded, but since I'd
> prefer to stomp out as many manual tests as possible, I'm fine w/ making
> their names be uglier (and I do also like the clarity the naming provides).

I don't see how else you would distinguish manual tests from helper files.

> Is it too much to ask that we have similar names for either testharness
> tests or reftests so that you can tell which kind a test is without
> having to open the file? /me holds out a faint hope ...

I think it's too much effort to require that all testharness.js tests 
have something specific in the filename. Reftests have to be parsed to 
work out the reference anyway.

>     * Any other file is a helper file.
>
>     These rules can be overridden by providing an override.manifest
>     file. Such a file can contain a list of filenames to exclude from
>     the normal processing above and a list of urls for tests, similar to
>     my previous proposal. So for example one might have
>
>     [exclude]
>     foo.html
>
>     [testharness]
>     foo.html?subset=1
>     foo.html?subset=2
>
>     I am still not sure how to deal with timeouts. One option would be
>     to put the overall timeout in a meta value rather than in the
>     javascript, since this will be easier to parse out. For tests where
>     this doesn't work due to strong constraints on the html, one could
>     use the override.manifest as above (and also specify the timeout in
>     the js). I can't say I am thrilled with this idea though.
>
>
> Ignoring the issues around query-param based tests and timeouts, is
> there a reason we'd want to allow exceptions at all apart from the fact
> that we have a lot of them now? I.e., I'd suggest that we don't allow
> exceptions for new tests and figure out if we can rename/restructure
> existing tests to get rid of the exceptions.

The exceptions exist only to handle query params and other genuinely 
exceptional circumstances. The point is not to allow deviations in cases 
that could conform to the scheme, but to allow flexibility where it is 
really required. Since we already have cases that require it, and the 
people who require it are typically advanced test authors, this seems 
quite acceptable.

> As far as timeouts go, I'm still not sold on specifying them at all, or
> at least specifying them regularly as part of the test input. I'd rather
> have a rule along the lines of "no input file should take more than X
> seconds to run" (obviously, details would qualify the class of hardware
> and browser used as a baseline for that). I'd suggest X be on the order
> of 1-2 seconds for a contemporary desktop production browser on
> contemporary hardware. I would be fine w/ this being a recommendation
> rather than a requirement, though.

Well, there are a lot of issues here. Obviously very-long-running tests 
can be problematic. On the other hand, splitting up tests where they 
could be combined creates a lot of overhead during execution. More 
importantly, some tests simply require long running times. It isn't 
uncommon to have tests that delay resource loads to ensure a particular 
order of events, or similar. Tests like these intrinsically take more 
than a few seconds to run and so need a longer timeout.

I don't think we can simply dodge this issue.
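
That said, if we did go with the meta value, pulling it out at 
manifest-build time would at least be cheap. A rough sketch (Python; 
the "timeout" meta name and seconds-valued content are assumptions for 
illustration, nothing we have agreed on):

    import re

    # Crude: assumes name precedes content and a quoted integer value;
    # real code would use a proper HTML parser.
    META_TIMEOUT = re.compile(
        r'<meta\s+name=["\']timeout["\']\s+content=["\'](\d+)["\']',
        re.IGNORECASE)

    def extract_timeout(source, default=10):
        """Overall timeout in seconds for a test file, or the default."""
        match = META_TIMEOUT.search(source)
        return int(match.group(1)) if match else default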

Received on Thursday, 22 August 2013 16:55:35 UTC