Re: Specifications with matching test APIs

Related to this thread "Specifications with matching test APIs"
and the earlier thread "UserAgent-specific files in Web Platform Tests"

This is a summary of a call about web platform testing with “Test
APIs”. Participants: James Graham (Mozilla); Philip Jägenstedt, Reilly
Grant, and Vincent Scheib (Google).

Web Bluetooth, WebUSB, and WebVR are actively developing web platform
tests with requirements beyond those of existing tests. Complex state
must be configured before these features can be tested, e.g. fake
Bluetooth devices. “Test APIs” are proposed to be paired with
specifications for this purpose.
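As a purely illustrative sketch of the pattern being discussed (the class and method names below are invented for this example; they are not the actual Web Bluetooth test API), a test might configure a fake device before exercising the feature:

```javascript
// Hypothetical sketch of a "test API" for fake Bluetooth devices.
// Names (FakeBluetooth, simulateDevice) are invented for illustration
// and do not correspond to any shipped testing interface.
class FakeBluetooth {
  constructor() {
    this.devices = [];
  }

  // Register a fake peripheral that the feature under test would
  // then be able to discover.
  async simulateDevice({ name, uuids }) {
    const device = { name, uuids };
    this.devices.push(device);
    return device;
  }
}

async function demo() {
  const fake = new FakeBluetooth();
  const device = await fake.simulateDevice({
    name: 'Heart Rate Monitor',
    uuids: ['heart_rate'],
  });
  console.log(device.name);        // "Heart Rate Monitor"
  console.log(fake.devices.length); // 1
}
demo();
```

The point of the sketch is only that test-specific setup (the fake device) lives in a separate API surface from the feature itself, which is what makes the tests shareable across implementations.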

Concerns raised and discussed include:

A) New testing patterns may exert unexpected influence on future
implementations. This is particularly risky when only a small number of
developers shape the API, e.g. WebUSB, which does not yet have other
implementations.

B) WebDriver already operates at a high level, which helps keep test
APIs from becoming too implementation-specific.
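As a rough, hypothetical sketch of what the WebDriver route could look like (this endpoint does not exist in any spec; the path and payload are invented here), a fake-device setup step might be an ordinary session-scoped extension command, expressed in terms of feature concepts rather than browser internals:

```
POST /session/{session id}/fake-device
{"name": "Heart Rate Monitor", "services": ["heart_rate"]}
```

Because WebDriver commands cross a process boundary over HTTP, a design like this cannot accidentally depend on in-renderer implementation details the way an in-page polyfill can.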

C) Test APIs that don’t work with browsers’ standard shipping versions
may be a problem. E.g. they won’t run on web-developer-focused testing
infrastructure (BrowserStack, Sauce, etc.), and they’re harder to run
outside a vendor’s own CI.
D) Our primary goal is to have conforming web browser implementations.
The goal of “Testing APIs should work for web-app developers” is a
lower priority.


Test APIs should clearly indicate their purpose and scope:

- They are intended only for WPT. This means test APIs can be modified more
freely if a later implementation discovers limitations in the testing API.
We do not want additional resistance to API change due to e.g. web app
developers using the test API.

- Test APIs should be carefully designed to use only concepts from the
feature being tested, not details of the implementation, e.g. garbage
collection, implementation-specific feature details, or synchronous
responses.
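To make the last point concrete, here is a hedged sketch (all function names are invented for illustration) contrasting a test API that bakes in synchronous behavior with one that models setup asynchronously, as the feature itself does:

```javascript
// Illustrative only: contrasts a test API that encodes an
// implementation detail (synchronous replies) with one that sticks
// to feature-level concepts. All names here are hypothetical.

// Risky: assumes the fake device is usable the instant the call
// returns. That may happen to be true in one implementation, but
// synchrony is not a concept from the feature being tested.
function setFakeDeviceSync(registry, name) {
  const device = { name, connected: true };
  registry.push(device);
  return device;
}

// Better: models device setup as asynchronous, matching how the real
// feature behaves, so another implementation can fulfil the promise
// whenever its own internal setup completes.
function setFakeDeviceAsync(registry, name) {
  return Promise.resolve().then(() => {
    const device = { name, connected: true };
    registry.push(device);
    return device;
  });
}

const registry = [];
setFakeDeviceAsync(registry, 'fake-usb-0').then((device) => {
  console.log(device.name, registry.length); // "fake-usb-0" 1
});
```

Tests written against the asynchronous form need no adjustment if a second implementer’s setup takes longer; tests written against the synchronous form would.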

Full notes:

On Wed, May 3, 2017 at 8:55 AM, Philip Jägenstedt <> wrote:

> On Mon, Apr 24, 2017 at 4:33 PM James Graham <>
> wrote:
>> On 22/04/17 02:28, Reilly Grant wrote:
>> > On Fri, Apr 21, 2017 at 6:11 PM Rick Byers <> wrote:
>> >
>> >> Thanks Reilly, I support continuing to move in this direction!
>> >>
>> >> Are you planning on shipping this API in release Chrome builds behind a
>> >> flag?  I think it would be reasonable to modify the WPT infrastructure
>> >> (stability_checker, dashboard) to pass a --enable-testing-apis flag
>> >> (although we might need to consider the security implications of that
>> for
>> >> the WPT infrastructure running potentially untrusted test patches).
>> But I
>> >> don't think we'd want to use content_shell (or even Chromium builds) in
>> >> that infrastructure - at least not in place of Chrome builds.
>> >>
>> >
>> > The ability to override the Mojo services provided to the renderer from
>> the
>> > renderer itself (which is how Chromium's polyfill for this API is
>> > implemented) is only available in content_shell when the
>> --run-layout-test
>> > parameter is passed. There have been discussions with the Mojo team
>> about
>> > making this available in production Chrome builds when a flag is
>> enabled.
>> > It would have to be a flag which displays the "unsupported flag,
>> security
>> > and stability will suffer" infobar because it effectively allows
>> arbitrary
>> > JavaScript to run with the privileges of the renderer.
>> This seems like:
>> * An API with unclear vendor buy-in.
>> * A test API that is (effectively) not available outside Chromium CI
>> (although it's possible to get content_shell builds, it's not a bad
>> approximation to assume that no one will).
>> So I'm pretty worried about this approach. It seems like there's a high
>> chance that the test api will encode Blink implementation details, we
>> will struggle to run the tests outside your CI, and web developers who
>> want to test their USB-using website will be left to search for a
>> different solution.
> I think it's probably true that there are ways of depending on
> implementation details with a testing API that are unlikely/impossible
> using a WebDriver extension, but with either approach it would be very
> surprising if the tests didn't need any adjustment when the second
> implementer starts running them. As long as the effort to find and fix such
> problems is much lower than writing tests from scratch, is it not still a
> net win? It seems likely to me that this would be the case, especially when
> making a conscious effort to define a testing API that doesn't depend on
> implementation details.
> Until there is a second implementer showing public interest, would it help
> to contain tests like this to webusb/chromium/ or similar?

Received on Tuesday, 13 June 2017 17:52:56 UTC