Minutes from W3C M&E IG monthly call 3 March 2020

Dear all,

The minutes from the Media & Entertainment Interest Group call on Tuesday 3rd March are now available [1], and copied below.

The slides from the Media Timed Events TF update [2] and Bullet Chatting architecture presentation [3] are also available.

The new media production use cases repository is here [4].

We'll announce a date for the Bullet Chatting follow up discussion soon.

Kind regards,

Chris (Co-chair, W3C Media & Entertainment Interest Group)

[1] https://www.w3.org/2020/03/03-me-minutes.html
[2] https://docs.google.com/presentation/d/1nWEG-LNZiNt0EQV91AryzmolJ-KBJYKJdgNatCMNIy8/edit
[3] https://www.w3.org/2011/webtv/wiki/images/c/c6/Bullet_Chatting_TF_03_03_2020.pdf
[4] https://github.com/w3c/me-media-production

--

W3C
- DRAFT -
Media and Entertainment IG
03 Mar 2020
Agenda

Attendees

Present
    Kaz_Ashimura, Chris_Needham, Larry_Zhao, Peipei_Guo, Takio_Yamaoka, Yajun_Chen, Tatsuya_Igarashi, Gary_Katsevman, Garrett_Singer, Huaqi_Shan, Steve_Morris, Nigel_Megitt, Pierre_Lemieux

Regrets

Chair
    Chris, Igarashi, Pierre

Scribe
    kaz, cpn

Contents
Topics

Introduction
Media production use cases
Media Timed Events TF
Media production use cases
WoT follow-up
Bullet chatting
Summary of Action Items
Summary of Resolutions

<kaz> scribenick: kaz

# Introduction

Chris: We have a few topics today:
.... an update and next steps for media production use cases,
.... an update from the Media Timed Events TF,
.... following up on last month's Web of Things call,
.... and Bullet Chatting TF.
.... AOB?

(none)

# Media production use cases

Chris: There are some comments from Nigel on Garrett's pull request

<cpn> https://github.com/w3c/media-and-entertainment/pull/33

Chris: We discussed whether or not we should create a separate repo,
.... so that we can gather related documents in one place.
.... So we have created a repo to use

<cpn> https://github.com/w3c/me-media-production

Chris: It's empty at the moment though,
.... and we would like to move the initial document to that repo.
.... In doing that we should also take into account Nigel's comments.

Garrett: That's a good idea

Nigel: We should transfer the existing issues/PRs to the new repo.

Chris: I will look into how to do that.

Garrett: There are several issues there; the main one is about describing the use cases.

Nigel: It's a good time to solicit questions, and write some clear motivating use case descriptions.
.... We should explain the outcomes we want to achieve, the use cases,
.... and then any resulting gaps, and prioritise them in order of importance.

https://github.com/w3c/media-and-entertainment/issues/30 Issue 30

https://github.com/w3c/media-and-entertainment/pull/33 PR 33

https://github.com/w3c/media-and-entertainment/pull/33#pullrequestreview-346443087 Nigel's comments

Chris: I agree. Once we have this in the new repo we can add to it, show it to people, and hopefully gather further contributions.
.... If Pierre can join later, I'll ask him for an update.

# Media Timed Events TF

Chris: This is an update on the TF work.

<cpn> https://docs.google.com/presentation/d/1nWEG-LNZiNt0EQV91AryzmolJ-KBJYKJdgNatCMNIy8/edit

Chris: To recap, we had a goal to improve support for timed events related to audio/video media on the Web.
.... This extends the existing TextTrackCue support to general data cues.
.... There was some media industry interest, around inband events.
.... We identified some gaps, including timing accuracy of cue rendering and content synchronization.
.... The main outcomes so far: We published an IG note with use cases and gap analysis.

https://www.w3.org/TR/media-timed-events/ Note

Chris: Also started WICG DataCue incubation following TPAC 2018.
.... Current status: I'd like to complete the next revision of the IG Note.
.... Thanks to everyone who provided feedback, I have made a new draft.
.... I would like to get further review before we finalize it.
.... I'll follow up with people offline, and share with the IG.

https://w3c.github.io/me-media-timed-events/ new draft

Chris: We have two proposed changes to the HTML spec.
.... One is allowing a TextTrackCue end time to represent the end of the media: https://github.com/whatwg/html/issues/5297
.... using a value of Infinity.
.... The second proposal is for text track cue event timing accuracy: https://github.com/whatwg/html/issues/5306
.... I'm working on this now. Initial response from Philip Jagenstedt was positive, so I'll draft some wording for review.
.... There's also implementation work by Chromium team happening now.
.... Issue 576310: https://bugs.chromium.org/p/chromium/issues/detail?id=576310
.... Issue 1050854: https://bugs.chromium.org/p/chromium/issues/detail?id=1050854
.... Two issues there: one is for timing accuracy, the other relates to some events not firing when they should.
.... As next steps, I'm trying to restart the activity on DataCue.
.... I have been talking to the people from DASH-IF. They have a parallel activity on DASH eventing.
.... They have calls every 2 weeks on Friday afternoon and have invited people from W3C to join.
.... I myself joined 2 weeks ago.
.... If you would like to join it, you're welcome, please contact me for the details.
.... (next call on Friday, March 6)
.... The next Media Timed Events TF call is on March 16.
.... I'm also having discussions with some of the implementers, as we need to get their engagement
.... and bring them into the activity.
.... I'll have more to report on March 16th, and we can discuss further work for the TF.
.... For example, there was discussion during the Media WG meeting at TPAC
.... about developing the cue rendering pipeline idea.

https://www.w3.org/2019/09/19-mediawg-minutes.html#item01 Media WG discussion during TPAC

Chris: Basically, from MEIG's point of view we want to hand over the work to the WICG and close the TF if there's no more to do.
.... But in WICG we need the relevant people around the table to make progress.
.... Any questions/comments?

(none)
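As an illustrative sketch (not the spec text), the proposal in whatwg/html#5297 would let a cue end time of Infinity mean "until the end of the media". The helper below is hypothetical, showing how active-cue computation might interpret such an unbounded end time:

```javascript
// Hypothetical helper: decide whether a cue is active at the current
// playback time, treating endTime === Infinity as "until end of media",
// in the spirit of whatwg/html#5297. Not a standardised API.
function isCueActive(cue, currentTime, mediaDuration) {
  // An unbounded cue ends when the media ends.
  const end = cue.endTime === Infinity ? mediaDuration : cue.endTime;
  return currentTime >= cue.startTime && currentTime < end;
}

// Example: a cue starting at 5s whose end time is not yet known.
const cue = { startTime: 5, endTime: Infinity };
console.log(isCueActive(cue, 10, 60)); // → true (active mid-stream)
console.log(isCueActive(cue, 60, 60)); // → false (media has ended)
```

This matters for live streams and in-band events, where a cue's end time may only become known later, or never.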

(Pierre joins)

# Media production use cases

Chris: We talked about media production briefly,
.... I mentioned creating a new repo and migrating the resources to it.
.... Do you have anything to add about conversations you've had?

Pierre: I'm trying to involve relevant people, setting up a separate call,
.... but so far have not had much success.
.... Nothing to report yet.

Chris: OK. I should also mention the video editing WICG topic raised by Microsoft.
.... Perhaps we can invite them in to talk about it.

https://github.com/WICG/video-editing

# WoT follow-up

<cpn> scribenick: cpn

<kaz> https://github.com/w3c/wot-architecture/tree/master/USE-CASES WoT use cases

<kaz> https://github.com/w3c/wot-architecture/issues WoT architecture issues

<kaz> https://github.com/w3c/wot-thing-description/issues WoT thing description issues

Kaz: There are three pieces:
.... 1. the WoT WG are generating use cases on GitHub
.... 2. and there are some open issues
.... 3. also there's an ongoing parallel discussion with the Singaporean government about surveillance camera support,
.... but this should be separate from the M&E video streaming use cases.
.... As someone who's across both groups, I can generate some initial use cases, then invite MEIG people to the WoT use case discussion next week

Chris: I can help you prepare that.

Kaz: We can share an initial draft as a basis for discussion.

Igarashi: What is the agenda for the next joint call?

Kaz: The suggestion wasn't to have another joint call, but invite MEIG people to the WoT architecture discussion.
.... We can have another joint call later.

Chris: We can draft something to share with MEIG and invite contribution.
.... Also reach out to people not on the call today.
.... I think we need more input from our members, to bring use cases that are relevant.

Kaz: If you're interested, please join the WoT architecture call next week.

Chris: I can do that.

Igarashi: Is the expected input from MEIG or individual members?

Kaz: Individual members is OK, all are welcome.

Chris: The Web & TV IG produced home network requirements, may be useful to review.

https://www.w3.org/TR/hnreq/ Requirements for Home Networking Scenarios

<kaz> Kaz: The WoT architecture has two calls on Thursday: 1. 2am EST, 7am GMT, 8am CET, 4pm JST and 2. 11am EST, 4pm GMT, 5pm CET, 1am+1d JST

Chris: We can review which are still relevant, but need people from those organisations to contribute to the work ideally.

<kaz> https://www.w3.org/WoT/IG/wiki/WG_WoT_Architecture_WebConf WoT Architecture wiki (including the webex info)

<kaz> Kaz: The WoT WG are looking at home network use cases at the moment.

<kaz> scribenick: kaz

Chris: Will give a progress update on the mailing list, please participate!

# Bullet chatting

Chris: One of the outcomes from our last discussion was to look at the technical architecture.

https://www.w3.org/2011/webtv/wiki/images/c/c6/Bullet_Chatting_TF_03_03_2020.pdf Bullet Chatting Logical Architecture and Data Flow Diagram

Huaqi: [Slide 2] We've worked on the architecture diagram and data flow chart.
.... [Slide 3] Here's the bullet chatting local architecture diagram.
.... There are two main modules in the client, the Bullet Chatting Data Module and the Bullet Chatting Rendering Module.
.... On initialisation, the client loads a bullet chatting data format file from the server.
.... The file includes colour, font size, image, animation formatting data. It also includes the bullet chatting data.
.... When running, the client will also get incremental bullet chatting data via WebSocket or polling.
.... The rendering module needs to support three scenarios: video on demand, live streaming, and non-video.
.... In the video on demand scenario we also need to support synchronisation to the timeline.
.... When the user inputs the bullet chatting, the client sends the data to the server.
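The video on demand synchronisation described above can be sketched as a windowing function over the loaded bullet chatting data. The function name and data shape below are illustrative, not from the TF's presentation:

```javascript
// Hypothetical sketch of VOD timeline synchronisation: given bullet
// chatting items stamped with a media time (in seconds), return the
// items that should start rendering in the interval since the last
// playback tick. Names and data shape are illustrative only.
function itemsToRender(items, lastTime, currentTime) {
  // Render items whose media time falls in (lastTime, currentTime].
  // A seek simply moves the window, so replaying a section works too.
  return items.filter(i => i.time > lastTime && i.time <= currentTime);
}

const data = [
  { time: 1.0, text: 'Hello' },
  { time: 2.5, text: 'Nice shot!' },
  { time: 9.0, text: 'Ending soon' },
];
console.log(itemsToRender(data, 0.0, 3.0).map(i => i.text)); // → [ 'Hello', 'Nice shot!' ]
```

In a real client this would be driven from the media element's timeupdate events or a rendering loop, with incremental data merged into `items` as it arrives.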

Nigel: There's synchronisation with the timeline in video on demand. Is this also the case for live?

<xfq> See also: http://assets.processon.com/chart_image/5e4f97dbe4b0c037b5f82fc5.png

Nigel: Can you pause and rewind the live streaming?

Huaqi: There's no synchronisation with live streaming.

Chris: I guess with a live chat, it's not possible to synchronise to give the same experience to all viewers
.... because different people may experience different latencies in the video streaming.

Nigel: I'm thinking of the case where there's a live rewind window, in the HLS or DASH manifest, so you can go backwards.
.... In which case you would want to synchronise to the media, even though the chat is delivered live.

Chris: From the video on demand bullet chat examples I've seen, it does that, not sure about live.

Kaz: So the expected service here is recorded video, where users can add bullet chat annotations on the video, like a YouTube video?

Huaqi: Yes.
.... [Slide 4] The next data flow chart shows rendering bullet chatting from a file.
.... On the client side, after initialization the client will query the bullet chatting data file from the server,
.... the server obtains this from the database and returns it to the client.
.... If there is data in the file, the client will render the bullet chat.
.... [Slide 5] Next, sending bullet chatting.
.... On the client side, after initialisation, the user will input bullet chatting data.
.... The client does some local filtering for sensitive words. If it's ok it's sent to the server.
.... The server also applies filtering, and if it's OK the data is inserted in the database.
.... Then, we broadcast the bullet chatting data to all clients.
.... [Slide 6] Lastly, rendering incremental bullet chatting.
.... When running, the client will receive incremental bullet chatting data via WebSocket or polling.
.... The server gets bullet chatting data from the client, inserts it in the database, and broadcasts it to all clients.
.... So the clients receive all the bullet chatting data, and if there is data will render it.
.... Any questions on these data flows?
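The sending flow on slide 5 (local filtering, then server-side filtering and broadcast) might look as follows on the client side. The word list and transport callback are placeholders, not taken from the presentation:

```javascript
// Hypothetical sketch of the client-side send path from slide 5:
// filter the user's input against a sensitive-word list before posting
// it to the server, which applies its own filtering and then broadcasts.
const SENSITIVE = ['spoiler']; // placeholder word list

function filterOutgoing(text) {
  const lowered = text.toLowerCase();
  return SENSITIVE.some(w => lowered.includes(w)) ? null : text;
}

function sendBulletChat(text, post) {
  const ok = filterOutgoing(text);
  if (ok === null) return false;          // rejected locally
  post({ text: ok, sentAt: Date.now() }); // e.g. WebSocket send or HTTP POST
  return true;
}

// Usage with a stub transport:
const sent = [];
sendBulletChat('Great scene!', msg => sent.push(msg));
sendBulletChat('big spoiler ahead', msg => sent.push(msg));
console.log(sent.length); // → 1
```

Filtering twice (client and server) is a common design: the local check gives immediate feedback, while the server-side check remains authoritative since the client cannot be trusted.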

Chris: Thank you for preparing the architecture diagrams, this is very helpful.

<xfq> See also: http://assets.processon.com/chart_image/5e4fa83ce4b0362764fb5c58.png

Pierre: On the first slide, which shows the browser,
.... in the bullet chatting local architecture diagram,
.... who provides the bullet chatting data module? Where does it come from?

Huaqi: It's provided by the application, which could be a native app or web app.

Pierre: We're out of time, but I want to continue the discussion.

Chris: I suggest having another call soon. Waiting another month might be too long,
.... as I would like us to make more progress.
.... We'll discuss offline when to schedule it.
.... The next IG call will be April 7; there's also the MTE TF call on March 16.
.... We'll announce a date for continuing bullet chatting discussion soon.
.... Any last comments or questions?

(none)

Chris: Thanks everyone!

[adjourned]

Summary of Action Items
Summary of Resolutions
[End of minutes]
Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2020/03/06 06:34:40 $

Received on Friday, 6 March 2020 08:20:21 UTC