Re: Stability testing of PRs

On Tue, Oct 18, 2016 at 8:18 AM, Philip Jägenstedt <foolip@google.com>
wrote:

> On Tue, Oct 18, 2016 at 2:09 PM, James Graham <james@hoppipolla.co.uk>
> wrote:
>
>> On 18/10/16 13:03, Philip Jägenstedt wrote:
>>
>>> That sounds fantastic, James!
>>>
>>
Yes, very exciting!

I see your scripts download the absolute latest tip-of-tree browser build.
There's some risk that such a build will have bad bugs. Perhaps it's worth
running the tests against both tip of tree and the latest stable build?
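
To make that concrete, here's the rough shape of what I mean in Python;
download_browser() and run_stability_check() are just hypothetical
placeholders for whatever your Travis scripts already do:

import subprocess

CHANNELS = ("tip-of-tree", "stable")

def download_browser(channel):
    # Placeholder: fetch and unpack the requested build and return the binary
    # path. The existing script presumably already does this for tip of tree.
    return "/tmp/browsers/%s/chrome" % channel

def run_stability_check(binary, tests):
    # Placeholder: invoke the existing runner against the affected tests.
    return subprocess.call(["echo", "would run", binary] + list(tests))

def check_both_channels(tests):
    # Run the same affected tests against each channel and collect exit codes.
    return {channel: run_stability_check(download_browser(channel), tests)
            for channel in CHANNELS}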

>>> If you've modified a common.js but no tests, will there be a way to
>>> manually trigger a run for the tests you happen to know are affected?
>>>
>>
>> I don't know of an easy way to provide input to travis other than the
>> source tree. One could imagine an optional map of support files to tests,
>> and some tooling to help generate it for simple cases, I suppose.
>
>
> Could the commit message include AFFECTED=foo/bar/* or some such? Probably
> overkill to get this thing started, though.
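
If we did go that route, parsing the marker out of the commit message seems
cheap enough. A minimal sketch (the AFFECTED= name and the glob semantics are
just placeholders for whatever we'd agree on):

import re
from fnmatch import fnmatch

def affected_tests(commit_message, all_tests):
    # Collect every AFFECTED=<pattern> token from the commit message and
    # return the tests matching any of the patterns; an empty result would
    # mean falling back to the normal diff-based selection.
    patterns = re.findall(r"AFFECTED=(\S+)", commit_message)
    return [test for test in all_tests
            if any(fnmatch(test, pattern) for pattern in patterns)]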
>
>>> Using the same infrastructure, would it be possible to paste the test
>>> results into a comment in the PR after every change?
>>>
>>
>> Yes.
>>
>
> Wow, that would be pretty amazing. Even with browserstack, it's pretty
> painful to summarize how a test change affects different implementations.
>

I'm still hoping to get some people working on this use case, but we can
always look at extending what you've done.
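
The comment-posting half at least shouldn't need much beyond the GitHub
issues API; something like the following, where the owner/repo, PR number and
token are obviously placeholders and the summary text would come from
whatever the stability run produces:

import os
import requests

def post_results_comment(owner, repo, pr_number, summary):
    # PR comments are created via the issues endpoint; summary is the
    # pre-formatted text of the results.
    url = "https://api.github.com/repos/%s/%s/issues/%d/comments" % (
        owner, repo, pr_number)
    headers = {"Authorization": "token %s" % os.environ["GITHUB_TOKEN"]}
    response = requests.post(url, json={"body": summary}, headers=headers)
    response.raise_for_status()
    return response.json()["html_url"]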

I had planned to use BrowserStack or Sauce Labs rather than installing
browsers manually, both to keep things simpler (less to go wrong) and to get
all platforms covered automatically. Your approach has the significant
benefit of getting results from the absolute latest browser build. Are there
other reasons to prefer the additional complexity of downloading and
installing our own browsers to test against?
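
For comparison, the remote-service route I had in mind is roughly this,
using Selenium's Remote driver pointed at a BrowserStack-style hub; the hub
URL, credentials and capabilities here are illustrative, not a tested
configuration:

import os
from selenium import webdriver

def remote_session():
    # Credentials come from the environment; the capabilities pick one cell
    # of whatever browser/OS matrix the service offers.
    hub = "https://%s:%s@hub-cloud.browserstack.com/wd/hub" % (
        os.environ["BROWSERSTACK_USER"], os.environ["BROWSERSTACK_KEY"])
    capabilities = {"browserName": "Chrome", "os": "Windows",
                    "os_version": "10"}
    return webdriver.Remote(command_executor=hub,
                            desired_capabilities=capabilities)

driver = remote_session()
# ...the runner would drive the tests from here...
driver.quit()

The trade-off, as you say, is that the hosted browsers will lag behind tip
of tree.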

Received on Tuesday, 18 October 2016 15:04:39 UTC