Re: Tests that fail due to server problems

Thanks! I understand and agree with the points about 3rd-party
dependencies. Unfortunately we have no choice in this case.

I guess if there are no assertions, that's an indication that nothing
was tested. That makes sense in our use case, though it may be awkward
to defer all the assertions until after we determine the server status.
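
Concretely, I imagine something like this (a rough sketch only; the
probe endpoint is invented):

    var t = async_test("playback starts after license exchange");

    // Probe the 3rd-party server before running any assertions
    // (the URL here is hypothetical).
    fetch("https://license.example.com/ping").then(t.step_func(function (resp) {
        // Server is up: only now do the real assertions run.
        assert_true(resp.ok, "license server reachable");
        // ... the actual encrypted-media assertions would follow ...
        t.done();
    }), function () {
        // Server is down: see Shane's suggestion below for reporting
        // NOTRUN instead of letting the test time out.
    });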

...Mark

On Tue, Sep 13, 2016 at 4:19 PM, Shane McCarron <shane@spec-ops.io> wrote:

> The way we are looking to avoid these sorts of dependencies is by
> building a skeletal server for various things into WPT itself. That
> works for some small set of problems, but not all of them, I'm afraid.
>
> Personally, I would prefer a result of NOTRUN over FAIL. As to how you
> can achieve that: I think that if you establish an async_test at the
> beginning, never associate any assertions with it, and use
> explicit_done: true, then you can call "done" after determining that
> the remote server is unavailable, and the async_test will end up in
> the report with a value of NOTRUN.
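>
> Roughly, and completely untested (the probe URL is invented):
>
>     setup({explicit_done: true});
>
>     // Establish the test up front, with no assertions attached yet.
>     var t = async_test("encrypted-media playback");
>
>     fetch("https://license.example.com/ping").then(t.step_func(function () {
>         // Server is up: run the real assertions here, then finish.
>         t.done();
>         done();
>     }), function () {
>         // Server is down: end the harness without ever starting a step
>         // of "t", so it should be reported as NOTRUN rather than FAIL.
>         done();
>     });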
>
> This might not be completely kosher, you understand. I am not up on
> all the philosophy.
>
> On Tue, Sep 13, 2016 at 6:12 PM, Geoffrey Sneddon <me@gsnedders.com>
> wrote:
>
>> On Tue, Sep 13, 2016 at 11:53 PM, Mark Watson <watsonm@netflix.com>
>> wrote:
>> > We have some tests (for encrypted-media) which rely on a 3rd-party
>> > server. Presently, if that server fails, the tests report TIMEOUT,
>> > which is then indistinguishable (for some of the tests) from certain
>> > kinds of test failure (for example, video was expected to start but
>> > never did).
>> >
>> > What is the appropriate result when the inability to complete the
>> > test is due to such a 3rd-party dependency rather than a problem with
>> > the implementation under test? Should this be NOTRUN? How do we
>> > trigger it?
>>
>> Is there any way to avoid the 3rd-party dependency? Browser vendors
>> have typically attempted to avoid 3rd-party dependencies in tests at
>> all costs, given that they lead to a substantial increase in
>> intermittent failures, and tests that fail intermittently typically
>> get disabled, especially when they cannot be fixed. (You can't gate
>> landing a patch on tests passing if they fail intermittently even
>> 0.01% of the time: with a large number of tests, a significant
>> percentage of test runs ends up with some intermittent failure, so
>> you end up unable to land any patch.)
>>
>> As a result, in practice the tests are going to end up being manual
>> tests, automated or not. As such, I'd suggest that the actual status
>> returned is relatively unimportant.
>>
>> I presume from your question that there's some way to detect when the
>> 3rd-party server fails to respond. In that case, I'd suggest having
>> the tests fail when the server doesn't respond, with a description
>> saying as much (and probably suggesting a re-run).
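>>
>> For example (a sketch only; the probe endpoint is invented):
>>
>>     var t = async_test("encrypted-media playback");
>>
>>     fetch("https://license.example.com/ping").then(t.step_func(function () {
>>         // ... the real assertions go here ...
>>         t.done();
>>     }), t.step_func(function () {
>>         // Turn a missing server into an explicit, descriptive failure.
>>         assert_unreached("3rd-party server did not respond; " +
>>                          "try re-running the test");
>>     }));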
>>
>> /gsnedders
>>
>>
>
>
> --
> Shane McCarron
> Projects Manager, Spec-Ops
>

Received on Wednesday, 14 September 2016 00:46:11 UTC