Minutes from W3C M&E IG call: Web Media API Snapshot 2017 Test Suite

Dear all,

The minutes from the last Interest Group call on Tuesday 4th September are available [1], and copied below. Many thanks to Louay for presenting. The slide deck is here [2], and the test suite itself is here [3].

Our next call is Tuesday 2nd October, where we're planning to discuss WebRTC for live media streaming. Details to be announced nearer the time.

Kind regards,

Chris (Co-chair, W3C Media & Entertainment Interest Group)

[1] https://www.w3.org/2018/09/04-me-minutes.html
[2] https://www.w3.org/2011/webtv/wiki/images/4/40/WEB-MEDIA-API-TEST-SUITE-Fraunhofer-FOKUS.pdf
[3] https://github.com/cta-wave/WMAS2017

---

W3C
- DRAFT -

Media and Entertainment IG

04 Sep 2018

Agenda

Attendees
 Present
  Kaz_Ashimura, Chris_Needham, John_Luther, Louay_Bassbouss, Masaru_Takechi, Tatsuya_Igarashi, Francois_Daoust, David_Hassoun, Steve_Morris
Regrets

Chair
 Chris, Mark, Igarashi

Scribe
 cpn

Contents

Topics
 Introduction to CTA WAVE
 Web Media API Snapshot 2017 Test Suite
 Test Runner for Embedded Devices
 Test Runner Companion Page
 Test Reports for 4 Desktop Browsers and 3 Embedded Devices
 Test Results Filtering and Comparison
 Gap Reports for Web Media APIs
 Discussion and Q&A

Summary of Action Items

Summary of Resolutions

<kaz> scribenick: cpn

Introduction to CTA WAVE

presentation slides: https://www.w3.org/2011/webtv/wiki/images/4/40/WEB-MEDIA-API-TEST-SUITE-Fraunhofer-FOKUS.pdf

Mark: Welcome everyone. This is a talk about the Web Media test suite that is now being made available.
.... Louay Bassbouss from Fraunhofer FOKUS will be presenting.
.... CTA is the Consumer Technology Association, which is interested in both the business and standards sides of the technology.
.... (Slide 3) [presents WAVE membership list]
.... There's a big overlap with companies at W3C.
.... A lot of the main players on the web are involved.
.... We're looking at how to have better interaction between consumer electronics devices, running applications that present media services, such as OTT services or broadcast channels with a web presence.
.... If you write a web application, there are a lot of CE products today, such as smart TVs and retail set-top boxes, that have a web environment.
.... But it's hard to write a single web app that runs across all of them.
.... The WAVE project is addressing this through a few approaches.
.... (Slide 4) Content formats: avoiding having to store content in multiple formats,
.... leveraging the latest MPEG CMAF standard. We talked about that on a previous call.

https://www.w3.org/2018/06/05-me-minutes.html

Mark: (Slide 5) Then there's Device Playback, looking at the characteristics of how devices play back content. Issues such as glitches on switching bitrates, or display scaling, audio discontinuities, HDR support, content splicing.
.... (Slide 6) The final area is the different operating systems; we want to test that these all provide the same capabilities.
.... We start with the same HTML5-based platform, common to modern PC and mobile browsers. These tend to be much more up to date than CE devices.
.... To test devices, we write a set of reference tests in HTML5, and then we can port them to other platforms.
.... We're making those available for free to the industry.
.... I'll hand over to Louay now to look at the details.

Web Media API Snapshot 2017 Test Suite

Louay: Thank you, Mark.
.... (Slide 8) [Presents the Web Media API 2017 Community Group report]

<kaz> Web Media API Snapshot 2017

Louay: This spec will be updated annually, as will the test suite, to cover new APIs.
.... The spec lists core media specs such as MSE and EME, and network-specific APIs like XHR and Fetch.
.... Features that are not implemented in all UAs are excluded from the test suite.
.... (Slide 9) The test suite is open source on GitHub.

<kaz> test suite

Louay: It's forked from the Web Platform Tests repository.

<kaz> original Web Platform Tests site

Louay: The main change we made to WPT is to support embedded devices, like TVs and set-top boxes.
.... There are two main limitations: performance, and single window.
.... On a TV you can't open multiple windows or tabs, so we had to deal with this limitation.
.... Other features we added include running multiple tests, with filters.
.... (Slide 10) There is a script to set up locally, which brings the tests in from WPT.

<kaz> test script

Louay: The script will retrieve everything. WPT itself has no ECMAScript tests, as WPT covers only web APIs.
.... There is another test suite for ECMAScript.

<kaz> ECMAScript test suite

Louay: We didn't want to have two separate scripts, so we bring in the ECMAScript 5.1 tests and convert them into WPT-compatible tests.
.... These get merged into the existing folder structure.
.... You can initialise everything locally.
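
To make that conversion concrete, here is a minimal sketch of how an ECMAScript test file could be wrapped as a testharness.js test and merged alongside the WPT tests. The helper name, file paths, and wrapping strategy are illustrative assumptions, not the actual WMAS2017 conversion script.

```javascript
// Hypothetical Node.js helper: wrap an ECMAScript (Test262-style) test file
// as a testharness.js test so it can sit in the WPT folder structure.
// Paths and the wrapping strategy are assumptions for illustration only.
const fs = require("fs");
const path = require("path");

function wrapEcmaScriptTest(sourcePath, outputDir) {
  const body = fs.readFileSync(sourcePath, "utf8");
  const name = path.basename(sourcePath, ".js");
  // Run the ECMAScript test body inside a single test(): an uncaught
  // exception makes the subtest FAIL, running to completion makes it PASS.
  const html = `<!DOCTYPE html>
<meta charset="utf-8">
<title>${name}</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
test(function () {
${body}
}, ${JSON.stringify(name)});
</script>`;
  fs.writeFileSync(path.join(outputDir, `${name}.html`), html, "utf8");
}

// Example (hypothetical paths):
// wrapEcmaScriptTest("test262/ch15/15.4.4.14/length.js", "ecmascript/");
```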

Test Runner for Embedded Devices

Louay: (Slide 12) The test runner for embedded devices is set up on AWS.
.... The main change here is to support embedded devices: such devices have only one window, so we extended the test runner accordingly.
.... (Slide 13) Part of the test runner logic runs on the server.
.... Each test runs in a session with an identifier token.
.... When a test run is completed, the results are reported to the server, which responds with a link to the next one.
.... If one test crashes the browser, you can restart everything in a new session or you can continue from where it crashed in the existing session.
.... When many tests are run locally, each test runs in a new tab, which requires more resources.
.... An HTML test report is created.
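
As an illustration of the session flow described above, the sketch below shows how a page running in the single available window might report its results and navigate on to whichever test the server returns next. The endpoint paths and JSON field names are assumptions, not the actual WMAS2017 API.

```javascript
// Illustrative only: report this test's results under the session token,
// then navigate the one available window to the next test URL the server
// hands back (or to the report page once the session is complete).
async function reportAndContinue(serverBase, sessionToken, testResults) {
  const response = await fetch(
    `${serverBase}/api/results?token=${encodeURIComponent(sessionToken)}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(testResults),
    }
  );
  const { nextTest } = await response.json();
  if (nextTest) {
    // Single-window device: reuse the same window for the next test,
    // so a crashed run can be resumed later from the same session.
    window.location.href = nextTest;
  } else {
    window.location.href = `${serverBase}/report?token=${encodeURIComponent(sessionToken)}`;
  }
}
```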

<kaz> Test Suite portal

Igarashi: About running tests on the server, it's an interesting approach. Does the user need to operate the UI on the embedded device? Or is this controlled through the server?

Louay: You need to open the test suite landing page link on the TV, which runs the first test. If there are no crashes, you'll have the results in a few hours.

Igarashi: If you find an error, can you run a test individually? Can this be controlled from the server side?

Louay: I'll show this later, but you can monitor the tests running on a different page, and you can see the progress. The results show whether each individual test passes or not.
.... When the test is running, you can see the report for an individual test immediately.

Igarashi: Does the user need to use a keyboard or mouse on the embedded device? What are the assumptions about UI on the embedded devices?

Louay: We kept this to a minimum, I'll come back to this.
.... The test suite has a companion page where you can monitor running the tests on the embedded device.

Test Runner Companion Page

Louay: (Slide 16) When you start a test, the TV will display a QR code. You can scan this to open the companion page on your mobile device to monitor the tests in real time.
.... If you only have a desktop browser, you can still monitor by opening the given URL and entering the token value.
.... (Slide 17) [shows Test Runner Companion Page]

<kaz> result page

<kaz> (shows the result table on the result page above)

Louay: The page shows the results and summary statistics.
.... For testing on multiple devices, you can start the test runner on each device. This creates a session per device.
.... You can open the test results page for a specific session.
.... You can also compare the results from different sessions.
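
A minimal sketch of how a companion page (or a desktop browser given the token) might poll a session's progress; the status endpoint and response fields are assumptions for illustration.

```javascript
// Illustrative only: poll the server for the progress of one session,
// identified by its token, and log a running summary.
async function pollSession(serverBase, sessionToken, intervalMs = 5000) {
  const url = `${serverBase}/api/status?token=${encodeURIComponent(sessionToken)}`;
  const timer = setInterval(async () => {
    const status = await (await fetch(url)).json();
    console.log(`${status.completed}/${status.total} tests finished, ` +
                `${status.passed} passed, ${status.failed} failed`);
    if (status.completed >= status.total) clearInterval(timer);
  }, intervalMs);
}
```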

Test Reports for 4 Desktop Browsers and 3 Embedded Devices

Louay: (Slide 20) The test reports are presented in the same format as WPT.
.... [shows example comparison between 4 desktop browsers]
.... You can get the report for each API.
.... (Slide 21) For embedded devices, the columns show the session IDs.

Test Results Filtering and Comparison

Louay: (Slide 23) There are links to compare the results from embedded and desktop browsers.
.... For example, if you are an embedded device manufacturer using Chromium, it makes sense to compare the results from the embedded device with the results from the same browser on desktop.
.... This shows differences between the initial browser version and the version after integration.
.... You can filter the results with reference to the equivalent desktop browser.
.... In the results you can also filter against one existing session.
.... Suppose you're integrating a new browser codebase version, you can run the tests in this codebase and compare the results with the reference browser.
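
As a sketch of the comparison described above, assuming each session's results are available as a simple map from test path to status (an assumed data shape, not the actual WMAS2017 format), the differences between an embedded device and its reference desktop browser could be extracted like this:

```javascript
// Illustrative only: keep just the tests whose status differs between the
// embedded-device session and the reference desktop-browser session.
function diffSessions(embeddedResults, referenceResults) {
  const differences = {};
  for (const [testPath, embeddedStatus] of Object.entries(embeddedResults)) {
    const referenceStatus = referenceResults[testPath];
    if (referenceStatus !== undefined && referenceStatus !== embeddedStatus) {
      differences[testPath] = { embedded: embeddedStatus, reference: referenceStatus };
    }
  }
  return differences;
}

// Example:
// diffSessions({ "media-source/seek.html": "FAIL" },
//              { "media-source/seek.html": "PASS" });
// -> { "media-source/seek.html": { embedded: "FAIL", reference: "PASS" } }
```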

Gap Reports for Web Media APIs

Louay: (Slide 25) This is a large piece of work, e.g., for APIs like CSS and the core HTML spec.
.... What we did was to select most of the APIs listed in the Web Media API spec, and create a mapping from each section in the API spec to the tests available in the WPT repo.
.... [example of MSE tests]
.... Using this mapping, we can see if any tests are missing.
.... It uses a template provided by Francois.

<kaz> Gap report template

Louay: It's not yet complete, so contributions to this would be very welcome - especially for CSS, where there are thousands of tests.
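
A minimal sketch of the kind of mapping a gap report can be built from; the section names and test paths below are examples only, not the actual report data.

```javascript
// Illustrative only: map each Web Media API spec section to the WPT
// directories that cover it; sections with an empty list show up as gaps.
const gapMapping = {
  "Media Source Extensions": ["media-source/"],
  "Encrypted Media Extensions": ["encrypted-media/"],
  "Fetch": ["fetch/api/"],
  "CSS Transforms": [], // no tests mapped yet, i.e. a gap
};

const gaps = Object.entries(gapMapping)
  .filter(([, tests]) => tests.length === 0)
  .map(([section]) => section);
// gaps -> ["CSS Transforms"]
```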

Discussion and Q&A

Louay: (Slide 27) I'd like to invite your questions and discussion.
.... I want to raise two points. Firstly, during this project we needed to clarify the meaning of the test status attributes in the test harness. The semantics for each status were unclear.

https://github.com/cta-wave/WMAS2017/#remarks-to-testharnessjs-statuses

Louay: It would be helpful to describe this in the WPT documentation.
.... Another point is whether there's interest from W3C in contributing the WAVE tests back to WPT?
.... Do you have other questions?

Mark: I'd just like to add something. The project has reached the release stage. The WAVE project has been working closely with W3C, to help keep the two aligned.
.... We've been working as a W3C CG. The Web Media API CG is essentially the WAVE project in W3C.
.... The WAVE steering committee has decided that the whole test suite should be made available to W3C, so we want to open a conversation on how to contribute this.

Igarashi: I'm very excited to hear about this contribution, I appreciate your efforts.
.... You mentioned that the test runner is running on AWS. Can anyone use it? It's a good way for us to provide feedback.

Louay: It's open to anyone, yes.
.... For now there's only one AWS instance, but we'd need to consider scalability.
.... For single tests, it's enough. The results will be deleted after 30 days.
.... If you want to use the test suite and send feedback, I recommend using the online version.
.... You can also set it up locally, so all information is stored on your own server.
.... With the online version, the test results are associated with a token.
.... You can share the results by sharing a link that includes the token.
.... Please send feedback using GitHub issues.

Mark: We expect there still may be bugs, so please do send feedback.
.... We have run on a few TVs, but not on all different brands. I would expect that as we run on more consumer products, we'd uncover other bugs.
.... Running on many more devices is the best thing to do now, to flush out bugs in the test suite.

<Zakim> kaz, you wanted to ask about the template (copyright, icon, etc.)

Kaz: This is very useful, and nicely organised. It's useful for W3C testing in general, and specifically for testing on embedded devices.
.... I'll talk to Philippe Le Hégaret about how we can contribute back to WPT.
.... There is also a branding issue, about copyright and use of the W3C logo. These appear on all the results hosted on the CTA side; maybe we should remove them.

<tidoust> [I suspect the logo and copyright were added by ReSpec, because that's what it does by default. This can be overwritten in ReSpec's settings]

Chris: Will there be a WPT meeting at TPAC?

Louay: I think so, but I'm not sure.

Chris: It would be good to open a direct conversation with people involved in WPT.

Igarashi: We could also include this in the M&E IG meeting during TPAC.

Francois: I want to echo that... if we're interested in contributing back, then we should push for that.
.... As I understand it, you haven't changed the existing test runner, but created a new one. It's not a requirement to use the WPT test runner.
.... The test runner you've contributed could be run as a side project under WPT.
.... It would be interesting to list the various contributions we could make.
.... The test runner is one. Filtering is another, it's something that isn't in WPT.
.... Also coverage information, a tricky issue, but something that could be investigated further.
.... Are there other things?

Louay: We didn't make a lot of changes to WPT, rather extended it with a new runner. We didn't change the existing tests.
.... I think contributing shouldn't be complicated.
.... The WPT report tool has been extended to add filtering. This is easier to contribute; it just adds an extra command-line option.

<Zakim> kaz, you wanted to mention that this approach would be useful for WoT, etc., as well

Kaz: I agree with Igarashi-san. This is useful, but also for other W3C WGs working on embedded devices. It would be great and useful for this IG and those other WGs to have a joint meeting on this topic.
.... For example, the WoT group is working on a similar effort.

Louay: I agree. It's useful in general for devices without a display.
.... Also for devices that can run JavaScript code but not necessarily open HTML pages.
.... The separate monitoring environment on desktop or mobile is useful.

Francois: Is contributing this something you're willing to do yourselves, or something to give to the community for someone else to take on?
.... I wonder if someone is able to drive the contribution. I wouldn't expect the WPT people to do the work.

Louay: We should discuss if it makes sense to contribute this back into the same repository or set up a different one.
.... What happens if you want to run the tests also on desktop or mobile? Which test runner to select?
.... It seems that WPT people are moving to WebDriver. This involves changes to the core of the test runner.

Francois: It's a good question. I guess the trend is to use WebDriver more and more. It's a problem for embedded devices if they don't support WebDriver.
.... It would be good if you can attend the meeting they'll have at TPAC, and add to their agenda.

Louay: Myself or Stephan will try to attend.
.... It would also be good to present the outcome from this call to WAVE, so they can discuss how to proceed.

Mark: Ideally we would have had that discussion on this call. We do need to have a discussion with the WPT people; it could be at TPAC, or we could set up another dedicated call.
.... Also, regarding WebDriver. We discussed whether to make this one of the required APIs in the Web Media API spec.
.... There was concern expressed on how to use WebDriver in a consumer electronics product such that it doesn't become a security hole in the shipping product.
.... We didn't know how to address that, so we excluded it.
.... The API spec will be updated each year. If we can work out the security concerns, we could include it in future.

Louay: It would be easier going forward to use WebDriver.

Mark: Maybe the security concerns around WebDriver could be a good topic to discuss with the WPT people when we meet.

Kaz: I have started discussing embedded device testing with Philippe Le Hégaret; we can also discuss it with the Browser Testing and Tools WG. David Burns is the Chair of that group.

Mark: Thank you very much Kaz for following that up.

<scribe> ACTION: Kaz to email PLH, IG Chairs, and Louay to follow up.

Francois: As part of that, we should include a short list of precisely what we can contribute, and what our questions or discussion topics are.

Louay: Yes, I'll do that.

Chris: Thank you Louay, both for this great work you've done and also for presenting today. It's a really valuable contribution to the industry.

[adjourned]

Summary of Action Items
[NEW] ACTION: Kaz to email PLH, IG Chairs, and Louay to follow up.
 
Summary of Resolutions
[End of minutes]
Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2018/09/10 10:40:35 $


