Re: Stability testing of PRs

On 18/10/16 16:03, Rick Byers wrote:
> On Tue, Oct 18, 2016 at 8:18 AM, Philip Jägenstedt
> <foolip@google.com> wrote:
>
>     On Tue, Oct 18, 2016 at 2:09 PM, James Graham
>     <james@hoppipolla.co.uk> wrote:
>
>         On 18/10/16 13:03, Philip Jägenstedt wrote:
>
>             That sounds fantastic, James!
>
>
> Yes, very exciting!
>
> I see your scripts are downloading the absolute latest tip-of-tree
> browser build.  There's some risk that build will have some bad bugs.
> Perhaps it's worth running the test on both tip of tree and latest
> stable build?

The argument for using the latest Firefox Nightly is that PRs are often 
testing features that are only available in Nightly. I think testing 
only in stable would make this check less useful ("this test 
consistently does nothing" isn't very interesting). I used the Chrome 
build I did because it seemed like the closest analogue to Firefox 
Nightly; if there's a more appropriate build we can use, I'm happy to 
change.

I think adding stable builds will be easy, but I would prefer to do 
that in a follow-up.
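
For the curious, fetching a tip-of-tree Chromium build isn't much 
work. Here's a minimal sketch in Python, assuming the public 
chromium-browser-snapshots bucket (not necessarily what the script 
actually does); a stable-channel download would come from a different 
source entirely, which is part of why I'd rather do it separately:

    import io
    import urllib.request
    import zipfile

    SNAPSHOTS = ("https://commondatastorage.googleapis.com"
                 "/chromium-browser-snapshots/Linux_x64")

    def fetch_tot_chromium(dest="chrome-linux"):
        # LAST_CHANGE holds the revision of the newest snapshot build.
        rev = urllib.request.urlopen(SNAPSHOTS + "/LAST_CHANGE").read()
        rev = rev.strip().decode()
        url = "%s/%s/chrome-linux.zip" % (SNAPSHOTS, rev)
        archive = urllib.request.urlopen(url).read()
        zipfile.ZipFile(io.BytesIO(archive)).extractall(dest)
        # extractall() doesn't restore the executable bit, so the
        # chrome binary needs a chmod before it can be launched.
        return rev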

>
>             If you've modified a common.js but no tests, will there be a
>             way to
>             manually trigger a run for the tests you happen to know are
>             affected?
>
>
>         I don't know of an easy way to provide input to travis other
>         than the source tree. One could imagine an optional map of
>         support files to tests, and some tooling to help generate it for
>         simple cases, I suppose.
>
>
>     Could the commit message include AFFECTED=foo/bar/* or some such?
>     Probably overkill to get this thing started, though.
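
(Parsing such a tag out of the commit message would certainly be 
straightforward. A rough sketch in Python, where the AFFECTED tag 
itself is hypothetical and nothing reads it yet:

    import re
    import subprocess

    def affected_globs():
        # On Travis, HEAD is the commit (or merge) under test, so the
        # most recent commit message is the one to scan for tags.
        msg = subprocess.check_output(["git", "log", "-1", "--format=%B"])
        return re.findall(r"^AFFECTED=(\S+)", msg.decode(), re.MULTILINE)

The harder part is mapping those globs onto the set of tests to run.)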
>
>             Using the same infrastructure, would it be possible to paste
>             the test
>             results into a comment in the PR after every change?
>
>
>         Yes.
>
>
>     Wow, that would be pretty amazing. Even with browserstack, it's
>     pretty painful to summarize how a test change affects different
>     implementations.
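
To expand on that "yes": Travis exposes the PR number in an 
environment variable, so the job could post results using the GitHub 
issues API, along these lines (a rough sketch; GH_TOKEN is a 
placeholder for a token we'd have to provision, and Travis doesn't 
expose secure variables to PRs from forks, which is a wrinkle we'd 
need to solve):

    import json
    import os
    import urllib.request

    def post_comment(body):
        pr = os.environ.get("TRAVIS_PULL_REQUEST", "false")
        if pr == "false":  # not a pull request build
            return
        url = ("https://api.github.com/repos/w3c/web-platform-tests"
               "/issues/%s/comments" % pr)
        req = urllib.request.Request(
            url,
            data=json.dumps({"body": body}).encode(),
            headers={"Authorization": "token " + os.environ["GH_TOKEN"]})
        urllib.request.urlopen(req)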
>
>
> I'm still hoping to get some people working on this use case, but we can
> always look at extending what you've done.
>
> I had planned to use browserstack or saucelabs rather than install
> browsers manually just to keep things simpler (less to go wrong) and
> automatically support all platforms.  Your approach has the significant
> benefit of being able to get results from the absolute latest browser
> build.  Are there other reasons to prefer the additional complexity of
> downloading/installing our own browsers to test against?

Using Travis has a number of advantages: it's free without having to 
set up an additional account, it's an existing part of the 
infrastructure, and we get an actual VM, which makes it trivial to run 
wptrunner, which then does all the heavy lifting. I think we could 
consider extending this approach with something like Sauce Labs in the 
future if there is coverage we can't get through the VM route, but 
unless we are concerned with non-desktop browsers, the flexibility of 
being able to run our own code seems likely to win out over the small 
amount of work needed to download and extract the relevant browser 
versions.
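
For concreteness, the core of the job boils down to something like 
the sketch below (flags as documented in the wptrunner README; the 
paths are illustrative, and on a headless VM it would need to run 
under xvfb). --repeat is what gives us a stability signal, by running 
each affected test several times:

    import subprocess

    # Run the affected tests repeatedly against the downloaded nightly;
    # instability shows up as results that differ between iterations.
    subprocess.check_call([
        "wptrunner",
        "--product=firefox",
        "--binary=firefox/firefox",
        "--tests=.",
        "--metadata=.",
        "--repeat=10",
    ])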

Received on Tuesday, 18 October 2016 15:18:25 UTC