Re: UserAgent-specific files in Web Platform Tests

Even a WebDriver extension would require browsers to expose an API, and
that costs code size for a testing-only feature.

On Mon, Apr 3, 2017 at 7:58 AM, Navid Zolghadr <> wrote:

> Hi Mike,
> Just to add something to your last paragraph. I had this doc
> <>
> prepared back then for input injection, comparing the two methods
> of using WebDriver versus a browser-exposed API.
> In particular, the desire for an alternative way of injecting input,
> rather than relying only on WebDriver, is to be able to trade
> WebDriver's isolation for efficiency of input injection, so that one can
> more easily run thousands of manual tests on each and every commit. In
> the end, WebDriverJS, for example, could be one implementation of that
> testing API (rather than the only testing API), and as long as the test
> page uses that API, it can use either the WebDriver path or the
> browser-exposed path, if the latter is exposed in the browser and better
> performance is desired.
> Cheers,
> Navid
> On Mon, Apr 3, 2017 at 10:43 AM Mike Pennisi <> wrote:
>> I'm wondering about extending WebDriver for testing internals. By using
>> its
>> built-in "Protocol Extensions" mechanism [1], each specification could
>> maintain
>> its own definition of the API it required.
>> This was discussed at the "Web Platform Test Integration Convergence"
>> meeting
>> in January. The minutes from that meeting [2] suggest some additional
>> considerations that I may not fully appreciate:
>> > Security limitation: webdriver cannot do more things than the user can.
>> > Principle is that a compromised browser shouldn't have any additional
>> > ability.
>> It's not clear why this is taken as a principle, especially in light of
>> the
>> discussion here (i.e. a JavaScript API that can do more things than the
>> user
>> can).
>> > WebDriver would commit us to not using sync testing
>> I believe this is related to the fact that WebDriver is an HTTP-based
>> protocol, so if tests were written from within the browser, they would
>> need to be built from asynchronous Fetch requests. I'm not sure this is
>> a problem, though.
>> Tests could be expressed with out-of-process scripts where synchronous
>> communication is possible. Even within the browser, support for async
>> functions
>> is improving rapidly [3], and this greatly reduces the cognitive overhead
>> usually associated with asynchronous programming in the browser.
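>> As a rough sketch of the asynchronous plumbing this would imply (the
>> session ID, spec prefix, and command name below are hypothetical, not
>> taken from any current spec):

```javascript
// Build the HTTP request for a hypothetical, spec-specific WebDriver
// protocol-extension command. The endpoint shape mirrors ordinary
// WebDriver commands but is entirely illustrative.
function extensionCommand(sessionId, specPrefix, command, params) {
  return {
    method: 'POST',
    url: `/session/${sessionId}/${specPrefix}/${command}`,
    body: JSON.stringify(params),
  };
}

// A test running inside the browser would then issue it with an
// asynchronous fetch, e.g.:
//   const cmd = extensionCommand(id, 'bluetooth', 'simulate-adapter',
//                                {leSupported: false});
//   await fetch(cmd.url, {method: cmd.method, body: cmd.body});
```

>> That is, every testing primitive becomes an awaited round trip rather
>> than a synchronous call, which async functions make tolerable.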
>> In my mind, one of the major considerations here is security. The
>> behaviors
>> we're trying to automate are taken for granted by end users. If this
>> functionality were mistakenly enabled on the web at large, the "User
>> agent"
>> trust model would break down. It seems critical that these testing
>> capabilities
>> are only enabled in the most specific circumstances, and that they are
>> otherwise isolated from the web platform at large.
>> My concern is that if we design this functionality *too* organically,
>> our solution will be optimized for developer ergonomics and
>> architectural simplicity, and will therefore be more susceptible to
>> security vulnerabilities.
>> I've read some limited criticism about the scalability of a
>> WebDriver-backed
>> solution for platform testing [4], but the traits that increase overhead
>> from
>> an efficiency standpoint (e.g. "a lot of communication across different
>> processes") are the same traits that would discourage accidental
>> deployment.
>> I'm also interested in consolidating efforts more generally; WebDriver is
>> just
>> entering the CR phase, but it is a well-established project that was
>> created
>> to address many of the needs we're discussing here. When I consider the
>> use
>> cases for WebDriver and the use cases for the kind of JavaScript testing
>> API
>> that's being discussed here, I see a lot of overlap. I may be missing a
>> distinction, though. Are there differences in the use cases? Or would a
>> pure JavaScript testing API make WebDriver obsolete?
>> This is a really important issue, so kudos to the Bluetooth team (and
>> everyone
>> else involved) for starting the discussion early!
>> [1] W3C WebDriver working draft, "Protocol Extensions"
>> [2] "Web Platform Test Integration Convergence"
>> [3], "Async Functions"
>> [4] "Input Automation in WPT Repo"
>> On 03/29/2017 11:08 AM, Philip Jägenstedt wrote:
>> We need to figure this out for lots of specs now, and I think the
>> approach taken makes a lot of sense: specs simply define the APIs that
>> are needed to test them; it's not somebody else's problem.
>> However, I would like to go a bit further and treat this more like we
>> treat any bit of API that tests define. Tests should simply assume that the
>> APIs exist, and otherwise fail. Having stubs of the APIs could make test
>> failures more explicit, but it seems like we could do without them. It
>> could be something like:
>> async_test(t => {
>>   navigator.bluetooth.test.setLEAvailability(false);
>>   // and so on, ending with t.done();
>> });
>> If lots of tests need the same setup, one can of course put that in a
>> shared bluetooth.js that fails more gracefully than the above one-liner.
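>> (For illustration only: such a shared helper might simply guard on the
>> API's presence before any test runs. The property names below are
>> assumptions mirroring the example above, not a real API.)

```javascript
// Hypothetical bluetooth.js helper: fail with a pointed message when the
// testing API is absent, instead of an opaque TypeError mid-test.
function assertBluetoothTestAPI() {
  const nav = typeof navigator !== 'undefined' ? navigator : undefined;
  if (!nav || !nav.bluetooth || !nav.bluetooth.test) {
    throw new Error('navigator.bluetooth.test is unavailable: ' +
                    'this browser build does not expose the testing API');
  }
  return nav.bluetooth.test;
}
```

>> Tests that need setup would call this first and fail explicitly,
>> rather than relying on the engine's error for a missing property.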
>> In order to actually make the test APIs available Chromium might need to
>> do some things in its testharnessreport.js, or perhaps provide a
>> command-line flag if we can figure out how to make it work for vanilla
>> Chrome builds. In any case, web-platform-tests would just assume their
>> presence.
>> Would that work?
>> Since we're trying to come up with a solution that can be copy-pasted
>> into other areas, there is the question of a namespace. One approach is to
>> just say that all specs are free to put their testing APIs wherever they
>> like. To be specified using Web IDL, if not implemented that way, it might
>> end up requiring a [Testing] extended attribute so that it's clear what
>> things are for testing only.
>> Another approach which I've argued for is to have a testing namespace,
>> and that all specs would put their testing stuff in a "partial namespace
>> testing", leaving us with a single object to expose or not expose.
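>> To illustrate that shape (the member name here is invented for the
>> example, not taken from any spec), the Web IDL could look roughly like:

```webidl
// A single shared namespace, exposed only in testing configurations:
namespace testing {};

// Each spec then contributes its own members, e.g. Web Bluetooth:
partial namespace testing {
  void setLEAvailability(boolean available);
};
```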
>> At this point, I'm inclined to say we should *not* enforce a testing
>> namespace, and just see what people end up doing organically. As long as
>> the APIs are only used in web-platform-tests, making changes to harmonize
>> after the fact will be possible, if so desired.
>> Feedback from non-Chromium folks much appreciated :)
>> On Thu, Mar 16, 2017 at 5:26 AM Vincent Scheib <>
>> wrote:
>> +1
>> Stubs sound good, and if possible the stubs would throw an assertion
>> pointing to instructions regarding how the platform-fakes files are
>> intended to be replaced with implementations.
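>> (A sketch of what such a stub file could look like; the function names
>> are placeholders, not the real Test API surface:)

```javascript
// platform-fakes/web-bluetooth-test.js stub for the main repo: every
// entry point throws, pointing implementers at replacement instructions.
function stub(name) {
  return () => {
    throw new Error(name + ' is a platform-fakes stub; vendors should ' +
                    'replace this file with a real implementation (see ' +
                    'the platform-fakes documentation)');
  };
}

const webBluetoothTest = {
  setLEAvailability: stub('setLEAvailability'),
  simulateDevice: stub('simulateDevice'),
};
```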
>> On Wed, Mar 15, 2017 at 11:42 AM, Reilly Grant <>
>> wrote:
>> I would like to try formally specifying this for WebUSB as well.
>> On Thu, Mar 9, 2017 at 8:05 PM Matt Giuca <> wrote:
>> I love this approach! Thanks for sharing and the write-up, Gio.
>> > On the main repo that file would be empty but on the Chromium repo that
>> file would have the necessary code to fake devices in Chromium.
>> s/empty/stubs?
>> I would definitely be up for converting my navigator.share
>> <>
>> and navigator.getInstalledRelatedApps
>> <> layout
>> tests (which currently use an explicit mock of calls to the Mojo service)
>> to a standard fake interface. Since my APIs are significantly simpler than
>> Bluetooth, I might give it a shot and report back to this group. (Note
>> though that they aren't standardised yet, so I'm not sure if they'd be
>> includable in TestHarness. Still, it would serve as a useful case study.)
>> On Fri, 10 Mar 2017 at 14:52 Giovanni Ortuño <> wrote:
>> Hi all,
>> Some context: We, the Web Bluetooth team, are looking into upstreaming
>> our Chromium Layout Tests to Web Platform Tests. In order to test the Web
>> Bluetooth API, we are introducing a Test API that accompanies the spec and
>> allows our tests to fake Bluetooth Devices: Web Bluetooth Test
>> <>
>> .
>> Parts of this API are implemented in JS. These parts are
>> Chromium-specific, e.g. how to talk to our IPC system, so it wouldn't
>> make sense to include them as resources.
>> To that end, we would like to add a file called
>> "web-bluetooth-test.js" which would be similar to "testharnessreport.js" to
>> the testharness repo. On the main repo that file would be empty but on the
>> Chromium repo that file would have the necessary code to fake devices in
>> Chromium.
>> There are many APIs that follow a similar pattern: they define a Test API
>> surface that they use to fake behavior. Some examples include Geolocation
>> <>,
>> Vibration
>> <>,
>> NFC
>> <>,
>> Sensors
>> <>,
>> etc. So we think it would make sense to add a folder to hold all of
>> these Test APIs; straw-man proposal: platform-fakes.
>> ./
>> ./testharness.js
>> ./testharnessreport.js
>> ./platform-fakes/web-bluetooth-test.js
>> ./platform-fakes/geolocation-test.js
>> ...
>> Do y'all think this is a good approach?
>> Let me know what you think,
>> Gio

Received on Tuesday, 4 April 2017 03:41:54 UTC