Minutes from Media Timed Events Task Force call, 17 December 2018

Dear all,

The minutes from Monday's Media Timed Events Task Force call are available [1], and copied below.

As the use case and requirements document [2] has been prepared by a small number of participants in the Task Force, we invite wider feedback from Interest Group members, to ensure that the document represents your views. Please leave comments in GitHub [3].

Discussion is now starting regarding incubation of the DataCue API in WICG. I will let you know when there is more news.

Kind regards,

Chris (Co-chair, W3C Media & Entertainment Interest Group)

[1] https://www.w3.org/2018/12/17-me-minutes.html
[2] https://w3c.github.io/me-media-timed-events
[3] https://github.com/w3c/me-media-timed-events/issues/

--

W3C
- DRAFT -

Media and Entertainment IG - Media Timed Events TF

17 Dec 2018

Agenda
    https://lists.w3.org/Archives/Public/public-web-and-tv/2018Dec/0014.html

Attendees

Present
    Kaz_Ashimura, Rob_Smith, Chris_Needham, Steve_Morris, Will_Law, Giri_Mandyam, Mark_Vickers, Nigel_Megitt, Ali_C_Begen, Song_Xu

Regrets

Chair
    Giri, Chris

Scribe
    cpn, kaz

Contents

Topics
    Status of publishing the IG Note
    WICG
    Review open PRs
    Other topics for 2019
    Collaboration with ePub community
    Time marches on
    Frame-accurate seeking

Summary of Action Items
Summary of Resolutions

<cpn> scribenick: cpn

# Status of publishing the IG Note

Giri: Publishing as a snapshot in time. We have this as an action item from the last call. Any update from the IG co-chairs or staff?
.... Any recommendations for what the group needs to do to publish?

Kaz: We can record a resolution in the minutes of this call. If all the participants are OK, we can go ahead and publish as an IG Note.

Giri: Let's review the outstanding PR; if we approve that, then we can publish as it stands. We can revise later. Any objections?

[none]

Mark: Go ahead and publish, we can always update it.

Kaz: We don't have concrete feedback so far from Japanese members, e.g., those working on Hybridcast. Maybe we can publish the initial version, and stakeholders can then give their feedback on the published version.

Chris: Sounds good. I think the document will benefit from wider review by IG members.

<scribe> ACTION: Giri to issue the CfC to publish as IG note

<kaz> scribenick: kaz

# WICG

Chris: It's taken a while, but I have now been in contact with Marcos Caceres, one of the co-Chairs of WICG.
.... His advice was to create an explainer document, and he would circulate it to his colleagues at Mozilla.
.... In the email thread with Marcos, I'm also in contact with Eric at Apple, so I think we have good implementer support.
.... I have created an initial draft of an explainer.

<cpn> https://github.com/chrisn/datacue/blob/master/explainer.md <- initial draft explainer

Chris: This is based on the discussion thread in WICG Discourse. I plan to add some more context and detail from the use case and requirements document.
.... But the scope is slightly different. The explainer is about the DataCue API, whereas the use case and requirements document is broader, about synchronization and synchronized rendering.
.... So the DataCue work is one of the outputs that could come from this TF. There are other outputs we could have, such as changes to the time marches on algorithm in HTML.
.... That wouldn't be a new specification, but a change to that part of HTML. That's a WHATWG thing, so it would be treated separately.
.... I'm pleased this is moving forward. There's additional work to be done, which is to add detail and be more specific in the explainer.
.... Because this is WICG, we can do things the IG can't do, e.g., look at the API shape in more detail. A typical explainer might contain example code to show how the API would be used.

Giri: Isn't that putting the cart before the horse? We'd want to get agreement in WICG on the API surface before talking about code.

Chris: Yes, I'm just saying that explainer documents often contain example application code to show how the API works.
.... It can be in draft form, to give an indication to someone who's not familiar with the material about the problem we're trying to solve.
.... The explainer document would then go, for example, to TAG review, and they need us to explain the context.
.... What I'd hope is that this document is something we do collaboratively with the browser companies.
.... At this point, it'll be them driving the work much more, as the purpose of the incubation is to get the implementation feedback.
.... As a next step, we need to open up a dialogue with representatives of the browser companies to figure out who'll be working on this.
.... We'll need a spec editor, for example. This discussion is starting offline.
.... We haven't spoken to Google or Microsoft recently; are there people there who would want to be active in this work?

Giri: You, Eric, and I have already volunteered to shepherd this, so you want to bring in people from Google, Microsoft, and Mozilla.

Chris: Yes
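
For illustration of the kind of example code mentioned above, here is a minimal sketch of how a DataCue-based API might be used, assuming the constructor shape from the draft explainer; the payload, type string, and handlers shown are assumptions, not an agreed API:

    // Hypothetical DataCue shape per the draft explainer -- an assumption,
    // not a shipped or agreed API.
    declare class DataCue extends TextTrackCue {
      constructor(startTime: number, endTime: number, value: unknown, type: string);
      readonly value: unknown;
      readonly type: string;
    }

    const video = document.querySelector('video') as HTMLVideoElement;

    // addTextTrack() creates a track in "hidden" mode, so cues fire
    // enter/exit events without being rendered.
    const track = video.addTextTrack('metadata');

    // Attach an arbitrary, application-defined payload to a time range.
    const cue = new DataCue(
      10.0,                                // start time (seconds)
      15.0,                                // end time (seconds)
      { adUrl: 'https://example.com/ad' }, // hypothetical payload
      'org.example.ad-insertion'           // hypothetical type identifier
    );

    cue.onenter = () => {
      // Runs when playback reaches the cue's start time.
      console.log('cue entered:', cue.value);
    };
    cue.onexit = () => {
      console.log('cue exited');
    };

    track.addCue(cue);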

<cpn> scribenick: cpn

Giri: I updated DASH-IF on our TF progress at their F2F meeting two weeks ago.
.... There's a good understanding that this would apply to both the Safari HLS and DASH use cases. DASH-IF will continue their work around eventing, but will want to take advantage of this work eventually.

Rob: We had 3 people from Google in the Spatial Data on the Web IG meeting at TPAC, I can suggest some contacts at Google.

Chris: Thank you. On the media side, there's also Mounir.

Rob: Joe Medley is a tech writer, also Jon Pallett had some good feedback on WebVMT.

# Review open PRs

Giri: We have one outstanding PR. This is about restructuring the use cases; Chris has done quite a bit of work on it. Are you satisfied with it?
.... I had a look and didn't see any problems.

<kaz> https://github.com/w3c/me-media-timed-events/pull/23 PR 23

Chris: Yes, I'm happy. I wanted a representative set of use cases, and a paragraph to describe each one. I'd like to invite wider feedback for review.

Giri: I suggest we use the publication of the document to invite review, so I suggest we merge now, then I can do the CfC.
.... Call for consensus: Are there any objections to publication, after this change is merged, as an IG note?

Nigel: A procedural question. Will you allow some time for people to check the results of that merge before publishing?

Giri: This isn't a standards track document. Kaz said he's had trouble getting feedback from the East Asian participants, so the CfC is a way of triggering that.
.... Simply merging the PR probably won't be enough to get the wider feedback. If you'd like a few days after merging, that's fine.
.... But we want to publish before the end of the year.

Nigel: I think that would be good. If you're seeking comments, it's worth thinking about how you'll reach out to people.
.... The SOTD (Status of This Document) section has an email address for comments. Can people raise issues against the repo?

Giri: I think we'd want to point people to the GitHub issue tracker rather than the mailing list.

Nigel: Another thing is having a link to the repo at the top of the document.

Giri: Could you please file an issue to capture that?

Nigel: Yes

<nigel> https://github.com/w3c/me-media-timed-events/issues/28 Invite comments by filing issues #28

Giri: Chris, can you make that change when you merge?

Chris: Yes, that's fine.

[discussion of using email or GitHub for the CfC]

Giri: If we do the CfC today, we can close this fairly quickly, so Thursday this week, then leave it with the IG chairs to work with staff to publish.
.... A quick note: Qualcomm is withdrawing from W3C, so I won't be able to participate after the end of the year.
.... I plan to be an individual contributor in the WICG.
.... Chris has been acting as a co-chair, thank you. Do you want a co-chair going forward?

<kaz> scribenick: kaz

Chris: Having a co-chair is really very helpful, good to have another point of view, and someone to work with. If someone would like to do this, I would be very grateful.

<cpn> scribenick: cpn

Mark: I believe we want to shut down the TF and move to WICG. So it's only a matter of a few weeks. A question for staff, can Giri continue to join calls while we close the TF?

Kaz: We can invite Giri as a public observer, but I think we should publish the draft during the period of Giri's participation, before the end of the year.
.... As this is the first publication of the note, we need W3M approval. I can get that quickly once we have group consensus. We can do that next week.

Mark: So the Task Force won't be finished at the end of the year, as there'll be some revisions. Is it OK to invite Giri to do those revisions, for a couple of weeks into January?
.... It would help from a continuity point of view.

<Zakim> kaz, you wanted to show an example of Note: https://www.w3.org/TR/2018/NOTE-wot-security-20181203/#toc

Kaz: I shared a WoT group Note as an example; look at the info at the top of the document.

<nigel> Thank you!

# Other topics for 2019

Giri: Open discussion, if there are offshoot topics.

# Collaboration with ePub community

Giri: A topic in CTA WAVE and DASH-IF, actually discussed at TPAC 2017, was representation from the ePub community, as there was some overlap.
.... Somehow that never really materialised in the TF. Were there things specific to that community that we haven't addressed?

Chris: I'd like to follow up with them.
.... Another topic is the frame-accurate seeking discussion in M&E IG, which went on to discuss frame accurate rendering of overlaid information. I think the IG should come back to this.
.... https://github.com/w3c/media-and-entertainment/issues/4
.... How much interest is there among IG members to continue with that topic?

Kaz: Possible collaboration between the media group and the publishing group?

<kaz> https://www.w3.org/publishing/events/tokyo18-workshop/minutes.html digital publication layout workshop minutes

Kaz: There was a workshop on digital publication layout in September in Tokyo. The main topic was manga / comics.
.... Some workshop participants were interested in animated comics, kind of like video streams embedded in a comic site/page.
.... They're interested in how to integrate video streams and speech recognition and speech synthesis into ePub applications.
.... In the end, their preferred approach is similar to the M&E approach. I mentioned we should collaborate a bit more.

Giri: It sounds like there's an opportunity to collaborate with the ePub efforts at W3C and publish even broader use cases, so this should be taken up at the IG level.
.... If the end goal is to feed into pre-standards work, it's beneficial to have multiple communities working together.

Chris: I agree. How to start that conversation, who should we talk to?

Kaz: We can discuss this with the co-chairs, Tzviya and Wendy, and with Ivan Herman.

# Time marches on

<Zakim> nigel, you wanted to mention a specific lower cue timing threshold

Nigel: We've talked a few times about the time marches on algorithm and cue firing accuracy, as opposed to the precision of the timestamp on the cue.
.... This is also being discussed in other forums. I'd like to put forward a number: 20ms.
.... This is coming from another group, who think that's appropriate. Currently it's 250ms.
.... Certainly, based on my limited testing, I found Firefox fires easily within 20ms, but Chrome and Safari do not.
.... The derivation of that figure is that it's half the duration of a frame at 25 fps.
.... So if you're at 25 fps and you fire the event within +/- 20ms, you'll be frame accurate.
.... Obviously, higher frame rates do exist, but from a human perception point of view, if you use this to drive caption timing or audio timing, it's small enough that most humans won't be able to perceive shorter time periods.
.... This is a tentative proposal, something to consider as a figure. I say this in the interest of working towards long-term alignment: I wouldn't want W3C to come to one figure while other standards bodies have another.
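
As a rough illustration of what testing this might involve (an assumed approach, not necessarily the one Nigel used; VTTCue stands in for a metadata cue), one can compare the media time when the enter event fires against the cue's start time:

    // Measure how long after a cue's start time its 'enter' event fires --
    // the accuracy the current 250ms (proposed 20ms) figure describes.
    const video = document.querySelector('video') as HTMLVideoElement;
    const track = video.addTextTrack('metadata');

    const cue = new VTTCue(5.0, 6.0, '');
    cue.onenter = () => {
      const latencyMs = (video.currentTime - cue.startTime) * 1000;
      console.log(`enter fired ${latencyMs.toFixed(1)} ms after cue start`);
    };
    track.addCue(cue);

    // At 25 fps a frame lasts 1000 / 25 = 40 ms, so firing within
    // +/- 20 ms (half a frame) keeps the event on the intended frame.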

Mark: This came up at TPAC; the suggestion was to send a pull request to the WHATWG with a spec change, including your motivation,
.... rather than anything this group would do, or working through WICG.

Nigel: Yes, there was an issue about how we contribute to WHATWG.

Chris: I wrote about that here: https://github.com/w3c/media-and-entertainment/issues/11.

Nigel: I agree with Mark, that would be a good way to move things forward.

Chris: Do we need this forum, not to do the spec work, but more as a coordination point?
.... Not just for time marches on, but also with the WICG work.

# Frame-accurate seeking

Rob: Coming back to the frame accurate seeking issue, this chimes with a fairly important set of use cases for WebVMT.
.... I've just raised two of these with the Open Geospatial Consortium (OGC).
.... These include the use case I originally suggested to this group, which was discounted as you were more interested in the rendering side.
.... I've raised ideas in OGC on search by location in a video archive, which could be used for drone maintenance in a remote location,
.... e.g., turbines in a wind farm: searching the video history for frames that show whether there are any damaged turbines.
.... Also, searching by metadata for the insurance industry: searching acceleration metadata for a particular signature that indicates a vehicle impact, and recovering a video clip at that time from the dash-cam footage.
.... It sounds like the frame accurate seeking could well be related.

Chris: The general topic of media synchronization is of broader interest than the event triggering aspect. My proposal would be to take this back to the IG and do something more focused on frame accurate seeking.
.... If that generates sufficient interest, we can reframe the work of the Task Force for the next stage.

Rob: There's wider interest than just broadcast. This could be done in the Spatial Data on the Web IG, but it's important to have the collaboration.

Song: I'm new to the TF. Regarding the figures for frame accurate seeking, decreasing from 250ms to 20ms: I'm working on uplink and downlink for UHD transmission over 5G, where the network latency is currently 200ms.
.... If we work towards a figure like 20ms, but network latency in real environments is more like 100ms, it will affect the accuracy that users experience. This could be of interest.

Nigel: The time marches on discussion, about accuracy relative to the cue timestamp, is totally separate from frame accurate seeking.
.... Frame accurate seeking is about the amount of control you have when seeking the video to a particular location.
.... The time marches on cue firing decision is about how close to the requested time the cue onenter and onexit events will fire.
.... These are separate things; we shouldn't mix them up.
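
To make the distinction concrete, a minimal sketch of frame accurate seeking, assuming a known, constant frame rate (seekToFrame is a hypothetical helper, not an existing API):

    // Seek to the midpoint of the target frame's display interval, so
    // rounding inside the media pipeline still lands on the intended
    // frame. Assumes a constant frame rate, which real media may not have.
    function seekToFrame(video: HTMLVideoElement, frame: number, fps: number): void {
      video.currentTime = (frame + 0.5) / fps;
    }

    // e.g. jump to frame 250 of a 25 fps video (10 seconds in):
    // seekToFrame(document.querySelector('video') as HTMLVideoElement, 250, 25);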

Giri: Thank you everyone for working in the TF this year. There are some action items, as well as areas to explore for 2019.
.... Thank you.

Mark: Giri, on behalf of the IG, I'd like to thank you for the exemplary work you've done chairing this group.

<nigel> +1 to Mark

<kaz> [adjourned]

Summary of Action Items
[NEW] ACTION: Giri to issue the CfC to publish as IG note

Summary of Resolutions
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2018/12/19 06:36:49 $


