
Re: UserAgent-specific files in Web Platform Tests

From: Philip Jägenstedt <foolip@google.com>
Date: Tue, 11 Apr 2017 13:53:54 +0000
Message-ID: <CAARdPYeQSbLiK4YATQAtZC-Zp9paQM9NvuVU5VHWB_P2VERXLg@mail.gmail.com>
To: James Graham <james@hoppipolla.co.uk>, public-test-infra@w3.org
On Wed, Apr 5, 2017 at 3:10 PM Philip Jägenstedt <foolip@google.com> wrote:

> On Mon, Apr 3, 2017 at 9:53 PM James Graham <james@hoppipolla.co.uk>
> wrote:
> On 29/03/17 16:08, Philip Jägenstedt wrote:
> > We need to figure this out for lots of specs now, and I think the approach
> > taken makes a lot of sense: specs simply define the APIs that are needed to
> > test them, it's not somebody else's problem.
> >
> > However, I would like to go a bit further and treat this more like we treat
> > any bit of API that tests define. Tests should simply assume that the APIs
> > exist, and otherwise fail. Having stubs of the APIs could make test
> > failures more explicit, but it seems like we could do without them. It
> > could be something like:
> >
> > async_test(t => {
> >   navigator.bluetooth.test.setLEAvailability(false);
> >   // and so on
> > });
> >
> > If lots of tests need the same setup, one can of course put that in a
> > shared bluetooth.js that fails more gracefully than the above one-liner.
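To make that concrete, a shared helper could centralize the availability check. (A minimal sketch; the helper name and failure behavior below are assumptions for illustration, not part of any proposal.)

```javascript
// bluetooth.js -- shared setup helper (sketch; all names here are
// illustrative assumptions). It checks for the test-only API up front
// and throws a clear error if it is missing, so a test fails with an
// explicit message rather than an opaque TypeError.
function getBluetoothTestAPI(navigatorLike) {
  const bluetooth = navigatorLike.bluetooth;
  if (!bluetooth || !bluetooth.test) {
    throw new Error('Web Bluetooth test API is not available');
  }
  return bluetooth.test;
}
```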
> If there are clear specs for how these APIs should work that's probably
> OK. But it really has to be specced in just as much detail as any other
> part of the specification (perhaps more since it may be intrinsically
> difficult to test). I think that there's a serious danger here that we
> get tests which depend on some test-only API that has a single
> implementation and not enough detail in the specification and other
> vendors end up unable to make a compatible implementation of the test API.
> Yes, these APIs would have to be defined in sufficient detail so that a
> second implementation does not need to reverse engineer the first. This
> would be true of WebDriver extensions defined by other specs as well, of
> course. (Putting everything into the WebDriver spec itself would be odd,
> especially for features in the early stages of the spec/implementation
> lifecycle.)
> In another context, Giovanni provided me with a list of all the features
> in Chromium that currently use a mocking mechanism from JavaScript to
> write some of their tests:
>    1. Battery Status: mock-battery-monitor.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/battery-status/resources/mock-battery-monitor.js?dr=C>
>    e.g. updateBatteryStatus()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/battery-status/resources/mock-battery-monitor.js?dr=C&l=29>
>    2. Bluetooth: web-bluetooth-test.js
>    <https://codereview.chromium.org/2737343003/diff/40001/third_party/WebKit/LayoutTests/resources/web-bluetooth-test.js>
>    e.g. setLEAvailability()
>    3. Budget: budget-service-mock.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/http/tests/budget/budget-service-mock.js>
>    e.g. addBuget()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/http/tests/budget/budget-service-mock.js?l=48>
>    4. Geolocation: geolocation-mock.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/geolocation-api/resources/geolocation-mock.js>
>    e.g. setGeolocationPosition()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/geolocation-api/resources/geolocation-mock.js?l=102>
>    5. ImageCapture: mock-imagecapture.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/imagecapture/resources/mock-imagecapture.js>
>    e.g. capabilities()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/imagecapture/resources/mock-imagecapture.js?dr=C&l=53>
>    6. Is App Installed: installedapp-test-helper.js
>    <https://codereview.chromium.org/2671683002/diff/160001/third_party/WebKit/LayoutTests/installedapp/resources/installedapp-test-helper.js>
>    e.g. pushExpectedCall()
>    7. Media Session: mediasessionservice-mock.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/media/mediasession/mojo/resources/mediasessionservice-mock.js>
>    e.g. getClient()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/media/mediasession/mojo/resources/mediasessionservice-mock.js?l=105>
>    8. NFC: nfc-helpers.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/nfc/resources/nfc-helpers.js>
>    e.g. pushedMessage()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/nfc/resources/nfc-helpers.js?dr=C&l=354>
>    9. Payments: payment-request-mock.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/payments/resources/payment-request-mock.js>
>    e.g. onPaymentResponse()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/payments/resources/payment-request-mock.js?l=43>
>    10. Presentation: presentation-service-mock.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/presentation/resources/presentation-service-mock.js>
>    e.g. onReceiverConnectionAvailable()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/presentation/resources/presentation-service-mock.js?l=70>
>    11. Sensors: sensor-helpers.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/sensor/resources/sensor-helpers.js>
>    e.g. setGetSensorShouldFail()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/sensor/resources/sensor-helpers.js?dr=C&l=331>
>    12. Shape Detection: mock-facedetection.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/shapedetection/resources/mock-facedetection.js>
>    e.g. getMaxDetectedFaces()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/shapedetection/resources/mock-facedetection.js?l=30>
>    13. VR: mock-vr-service.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/vr/resources/mock-vr-service.js>
>    e.g. addVRDisplay()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/vr/resources/mock-vr-service.js?dr=C&l=124>
>    14. WebShare: mock-share-service.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/webshare/resources/mock-share-service.js>
>    e.g. pushShareResult()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/webshare/resources/mock-share-service.js?dr=C&l=52>
>    15. USB: usb-helpers.js
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/usb/resources/usb-helpers.js?type=cs>
>    e.g. addMockDevice()
>    <https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/usb/resources/usb-helpers.js?type=cs&l=341>
> I was rather surprised by the length of the list, and can't take credit for
> any of it. For at least Bluetooth and USB, there are some ideas for what a
> testing API would look like. There is a risk that those APIs would look too
> much like Chromium's underlying abstractions, but as Giovanni mentioned
> they were deliberate about avoiding that as much as possible for Bluetooth.
> What would WebDriver extensions look like for, say, Bluetooth? There are
> many methods in the proposed API
> <https://docs.google.com/document/d/1Nhv_oVDCodd1pEH_jj9k8gF4rPGb_84VYaZ9IG8M_WY/edit?usp=sharing>,
> so presumably about as many new commands or subcommands, and arguments and
> return values would be serialized as JSON. I assume that many of the
> (instances of) interfaces would turn into something like web elements
> <https://www.w3.org/TR/webdriver/#dfn-web-element>, that show up as
> /{element id}/ in the command URLs. I'm not familiar with how this works;
> is it like URL.createObjectURL(), so that it will prevent GC and keep
> objects alive?
> Finally, one would have to write a util library that wraps all this into a
> nice API that can be used to write tests. That would be part of
> web-platform-tests, and not shipped with the browser itself.
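As a rough illustration of the wrapping involved (the endpoint path, payload shape, and postCommand() transport below are all hypothetical), such a test-side library might look like:

```javascript
// Hypothetical test-side wrapper over a WebDriver extension command.
// postCommand(endpoint, body) stands in for whatever transport the
// harness uses to issue WebDriver commands; the endpoint name and the
// JSON payload shape are illustrative only.
function makeBluetoothDriver(postCommand) {
  return {
    // Would correspond to something like
    //   POST /session/{session id}/bluetooth/le-availability
    // with a JSON-serialized body, following WebDriver conventions.
    setLEAvailability(available) {
      return postCommand('bluetooth/le-availability',
                         JSON.stringify({available: available}));
    },
  };
}
```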
> Giovanni, you have previously estimated the binary size increase from
> various approaches, do you have any guesstimate for how WebDriver would
> compare?
> My concern is that at least for Chromium developers, the overhead of
> learning all about WebDriver and representing their APIs within those
> constraints is going to be significant, a task to be dreaded if it's
> required.
> Is there any new information we could bring to this discussion? For
> Bluetooth I assume that the plans and wishes of other implementers would be
> helpful to know, but more generally?
> For that reason, for the "easy" case here (creating artificial user
> input), I really want us to start with a cross-browser implementation
> based on the WebDriver standard. For cases like bluetooth that don't
> already have an equivalent cross-browser standard test API we should
> strongly consider which parts it makes sense to expose to web developers
> and put those in the WebDriver standard. For the remaining parts great
> care is required to ensure that the test API isn't an afterthought but
> treated with the same care, and level of vendor buy-in, as the API it's
> trying to test.
> For the things that WebDriver already supports or could trivially support,
> I agree with this approach. There's already a command for clicking the
> middle of an element <https://www.w3.org/TR/webdriver/#element-click>, so
> using that it should be possible to automate at least the "needs user
> interaction" tests.

There's a Chromium "Architecture for exposing web-platform testing APIs"
thread that is relevant. My conclusion is that we shouldn't declare either
WebDriver or testing APIs as *the* way to automate things going forward,
because the trade-offs may look very different for different APIs.

The process I would propose is this: for every spec/feature that requires
some kind of automation, using WebDriver or otherwise, the vendor who wants
to write those tests first sends a message to public-test-infra detailing
how they'd like to test it and what other vendors would have to implement
to automate the tests. If other vendors are silent or positive, or we reach
consensus after some back and forth, then testing moves forward.

If there is skepticism to the point that no vendor explicitly welcomes the
tests and some vendor would rather not see them in web-platform-tests at
all, then that feature will remain untested by web-platform-tests. This
would be bad, but shared tests are only valuable if two or more vendors
actually want to run them.

Does that sound OK in the abstract?
Received on Tuesday, 11 April 2017 13:54:43 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:34:13 UTC