Re: Minutes from W3C M&E IG monthly call 7 Jan 2020: Bullet Chatting



On 14/01/2020, 10:28, "Kazuyuki Ashimura" <ashimura@w3.org> wrote:

    On Mon, 13 Jan 2020 23:09:57 +0900,
    Nigel Megitt wrote:
    > 
    > Dear all,
    > 
    > Bullet chatting got a brief AOB mention during yesterday's TTWG call. TTWG would like to help the Bullet Chatting task force understand the landscape of TTML2 and WebVTT better and can set aside some meeting time to discuss it.
    > 
    > I would propose that we use the period 1500-1600 UTC, prior to the 1600 start time of our normal weekly Thursday call – this could be done this coming Thursday 16th January, for example, or in another week.
    > 
    > I will schedule it for January 16th by default; however, if anyone is interested in this and has a preference for a different date, please let me know. It would be easy to move it to a different week (though not January 23rd).
    
    Thanks a lot for proposing an additional call to understand
    TTML2/WebVTT, Nigel!
    
    It would be great to have a call for that purpose, but unfortunately
    I have an overlapping weekly call at 15 UTC on Thursday, and am
    wondering if it's possible to meet on a different day of the week
    or at a different time.
    
    BTW, I think we should clarify the Bullet Chatting TF call slot
    as well, and would like to talk with the MEIG Chairs and the TF
    Moderator (Song) about that. Maybe we could have this training
    session during one of the TF calls?

Yes, we could. I'm not sure when they are scheduled, though?

Nigel
    
    Thanks,
    
    Kazuyuki
    
    
    > The agenda for the 16th January call will be at https://github.com/w3c/ttwg/issues/89 so comments on this agenda topic can be made there.
    > 
    > Kind regards,
    > 
    > Nigel
    > 
    > 
    > From: Chris Needham <chris.needham@bbc.co.uk>
    > Date: Friday, 10 January 2020 at 12:02
    > To: "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
    > Subject: Minutes from W3C M&E IG monthly call 7 Jan 2020: Bullet Chatting
    > Resent from: <public-web-and-tv@w3.org>
    > Resent date: Friday, 10 January 2020 at 11:59
    > 
    > Dear all,
    > 
    > The minutes from the Media & Entertainment Interest Group call on Tuesday 7th January are now available [1], and copied below.
    > 
    > The slides [2] and gap analysis [3] are also available. Bullet Chatting use case and API proposal documents are in GitHub [4].
    > 
    > Many thanks to Song Xu and Kaz for helping scribe the call.
    > 
    > Our next call is planned for Tuesday 4th February, which will be a joint call with the Web of Things Interest Group to discuss media related use cases for Web of Things.
    > 
    > Kind regards,
    > 
    > Chris (Co-chair, W3C Media & Entertainment Interest Group)
    > [1] https://www.w3.org/2020/01/07-me-minutes.html
    > [2] https://www.w3.org/2011/webtv/wiki/images/8/83/Bullet_Chatting_TF_07_01_2020_-_v3.pdf
    > [3] https://www.w3.org/2011/webtv/wiki/images/6/60/Bullet_Chatting_Requirements_Gap_Analysis_%E5%BC%B9%E5%B9%95%E9%9C%80%E6%B1%82%E5%88%97%E8%A1%A8.xlsx
    > [4] https://github.com/w3c/danmaku/
    > 
    > --
    > 
    > 
    > W3C
    > - DRAFT -
    > Media and Entertainment IG
    > 07 Jan 2020
    > Attendees
    > 
    > Present
    >     Kaz_Ashimura, Andreas_Tai, Chris_Needham, Francois_Daoust, Gary_Katsevman, Garrett_Singer, Huaqi_Shan, Kasar_Masood, Kazuhiro_Hoya, Larry_Zhao, Pierre-Anthony_Lemieux, Peipei_Guo, Rob_Smith, Song_Xu, Takio_Yamaoka, Will_Law, Fuqiao_Xue, Yajun_Chen, Barbara_Hochgesang, Zhaoxin_Tan, Tatsuya_Igarashi, Nigel_Megitt
    > 
    > Regrets
    > 
    > Chair
    >     Chris, Pierre, Igarashi
    > 
    > Scribe
    >     cpn, Song, kaz
    > 
    > Contents
    > 
    >     Topics
    >         Introduction
    >         Bullet Chatting Data Interchange Format
    >         Agenda
    >         Data Interchange Format Standardization Goal
    >         Minimum Viable Product Requirements
    >         Challenges of Extension based on WebVTT
    >         WebVTT Extended for Animation
    >         WebVTT Extended for Live Video
    >     Summary of Action Items
    >     Summary of Resolutions
    > <cpn> scribenick: cpn
    > 
    > # Introduction
    > 
    > Chris: Happy new year! Welcome to the first M&E IG call of 2020. We decided to use this call to make progress on Bullet Chatting, so this call is dedicated to the TF.
    > .... We've heard the use cases presented before, so today will be more about the detail.
    > .... AOB?
    > 
    > Nigel: If anyone is a TTWG member, please rejoin the group following TTWG re-chartering.
    > 
    > # Bullet Chatting Data Interchange Format
    > 
    > Song: Huaqi is the coordinator of the Bullet Chatting TF, and will introduce the data interchange format.
    > 
    > https://www.w3.org/2011/webtv/wiki/images/8/83/Bullet_Chatting_TF_07_01_2020_-_v3.pdf Huaqi's Slides
    > 
    > https://www.w3.org/2011/webtv/wiki/images/6/60/Bullet_Chatting_Requirements_Gap_Analysis_%E5%BC%B9%E5%B9%95%E9%9C%80%E6%B1%82%E5%88%97%E8%A1%A8.xlsx Gap Analysis
    > 
    > <scribe> scribenick: Song
    > 
    > # Agenda
    > 
    > Huaqi: In the agenda, we have 4 topics. Firstly, we discuss why we need to define a bullet chatting data interchange format.
    > .... I'll introduce a proposal to extend WebVTT to support bullet chatting animation, then we can discuss next steps.
    > 
    > # Data Interchange Format Standardization Goal
    > 
    > Huaqi: Why do we need to define a bullet chatting data interchange format?
    > .... After the gap analysis, we found it is necessary to define a data interchange format standard in order to support multiple scenarios, applications and platforms.
    > .... Bullet chatting is designed to support on-demand video, live video streaming, virtual reality video and 360-degree video, and also non-video scenarios: interaction within a webpage, interactive walls, etc.
    > .... Bullet chatting supports web apps, native apps, mini apps, etc.
    > 
    > # Minimum Viable Product Requirements
    > 
    > Huaqi: What are the minimum requirements to support bullet chatting?
    > .... There are three main aspects.
    > .... The first one is animation. Bullet chatting supports multiple lines of subtitles displayed at the same time, supports scrolling subtitles (e.g., from the right to the left), and also supports setting the scrolling duration.
    > 
    > <xfq> Bullet Chatting Use Cases https://w3c.github.io/danmaku/usecase.html
    > 
    > Huaqi: The second one is that bullet chatting is connected to the media timeline and is displayed in sync with it.
    > .... The third one is that the implementation has good scalability to support live video and non-video scenarios.
    > 
    > # Challenges of Extension based on WebVTT
    > 
    > Huaqi: We propose to extend WebVTT to support bullet chatting, since WebVTT has many advantages: it is simple, lightweight, mature, supported by browsers, and easier to extend to multiple applications and platforms.
    > .... We also face some challenges, at least in how to extend WebVTT to support animation and live video. We have some ideas. For animation, we may try to extend the WebVTT cue settings. For live video, we can refer to HLS m3u8, which supports both on-demand and live video.
    > 
    > # WebVTT Extended for Animation
    > 
    > Huaqi: Currently, WebVTT doesn't support animation.
    > .... If we use CSS to implement bullet chatting animation, we need a CSS engine to interpret it in different applications. That is not so easy.
    > .... So we suggest extending the WebVTT cue settings, using declarative syntax to implement bullet chatting animation.
    > .... Let's have a look at the example:
    > .... The first line in the NOTE shows WebVTT's existing cue settings; the next line shows our proposal, the extended cue settings. The main difference is whether an attribute has a pair of values separated by a semicolon.
    > .... Take position as an example: it indicates the horizontal offset. position:50% means in the middle of the screen. position:100%;10% means scrolling from the right to the left, ending at a 10% offset.
    > .... Currently, WebVTT supports fixed bullet chatting; that's why we choose to extend WebVTT.
    > .... Example: line, position and align are attributes which WebVTT already supports.
    > .... line indicates the vertical offset, e.g., line:0 indicates the top of the screen, line:100% indicates the bottom of the screen.
    > .... position indicates the horizontal offset: position:0 indicates the left of the screen, position:100% indicates the right of the screen.
    > .... align sets the alignment; possible values are start, middle or end. You can refer to the Mozilla Developer Network documentation for more details.
    > 
    > <cpn> scribenick: cpn
    > 
    > Huaqi: For scrolling bullet chatting, values are separated with a semicolon; the first value is the start value, the second is the end value.
    > .... The cue timing can be set. [example of transitioning opacity from 0;1, color:red;blue]
    > .... We can learn a lot from TTML. We would appreciate your comments on our proposal to extend WebVTT.
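    > 
    > [Scribe note: a hypothetical pair of cues illustrating the extended cue settings described above. The exact syntax was not finalised on the call; semicolon-separated values are start;end pairs, and the cue timing gives the animation duration:
    > 
    >     WEBVTT
    > 
    >     00:00:05.000 --> 00:00:13.000 line:10% position:100%;10% align:start
    >     A comment scrolling in from the right edge to a 10% offset
    > 
    >     00:00:07.000 --> 00:00:10.000 line:0 position:50% align:middle opacity:0;1 color:red;blue
    >     A fixed comment at the top, fading in while changing from red to blue ]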
    > 
    > Nigel: You say there isn't UA support for what you need, but there is support in TTML. Why do you want to extend WebVTT? Have you looked at TTML?
    > 
    > <xfq> https://w3c.github.io/danmaku/usecase.html#subtitles hasn't been updated for a while, and we need to update that indeed
    > 
    > <xfq> (we looked into TTML and WebVTT more after writing that section)
    > 
    > Rob: Do you have an example of something supported in TTML?
    > 
    > Nigel: TTML2 has the ability to do animation, for example
    > 
    > <kaz> https://www.w3.org/TR/2018/REC-ttml2-20181108/ TTML2
    > 
    > Nigel: This proposal suggests not to use CSS Animation. It would be interesting to know what the implementation constraints are.
    > 
    > Huaqi: I'll share the gap analysis in detail; this shows the gaps between WebVTT and TTML.
    > .... We preferred WebVTT as it is lightweight and simpler, and can be implemented more easily when migrated to other clients like native apps, MiniApps, etc.
    > .... TTML does support animation. And we think it's a good idea to use declarative syntax to implement animation, since CSS animation needs to be parsed by a complex CSS engine.
    > .... So when extending WebVTT, we prefer to reuse the way TTML supports animation.
    > 
    > Nigel: An approach is to profile TTML, e.g., take IMSC and add in the animation parts from TTML2.
    > 
    > <kaz> https://www.w3.org/TR/2018/REC-ttml-imsc1.0.1-20180424/ IMSC1
    > 
    > Nigel: If there's an important functional requirement around size, it would be good to know that.
    > 
    > <nigel> Specifically if there's something that needs optimisation to meet the "lightweight" requirement, what is that optimisation with respect to? Document size, speed, implementation size?
    > 
    > <Song> scribenick: Song
    > 
    > # WebVTT Extended for Live Video
    > 
    > Huaqi: We think WebVTT has limits in its support for live video.
    > .... Bullet chatting needs to support live video, so we face some challenges, and we want to extend WebVTT to support it. How can we support live video via WebVTT?
    > .... One option: we may refer to HLS m3u8 fragments. We can define a similar container file which contains several bullet chatting file fragments, e.g., several VTT files, as in the example.
    > .... By default, WebVTT cue timings are tied to the media timeline. In the live video scenario there is no timeline, so what can we do?
    > .... We have one idea: don't use cue timings for live video, and only use basic animation and bullet chatting. Rendering will be done by the user agent.
    > .... The user agent has to continuously read the container files to fetch the latest bullet chatting data.
    > .... In this way it can be accelerated via a CDN, which is easier.
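    > 
    > [Scribe note: a hypothetical index file in the style of an HLS m3u8 playlist, as one way to realise the container-file idea above; the fragment names are invented for illustration:
    > 
    >     #EXTM3U
    >     #EXT-X-TARGETDURATION:10
    >     #EXT-X-MEDIA-SEQUENCE:1042
    >     #EXTINF:10.0,
    >     bullet-chat-1042.vtt
    >     #EXTINF:10.0,
    >     bullet-chat-1043.vtt
    > 
    > The user agent would re-fetch the index periodically and load any new .vtt fragments it lists, which also keeps the files cacheable on a CDN. ]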
    > 
    > <cpn> scribenick: cpn
    > 
    > Huaqi: With HTTP-FLV, we can keep a persistent connection and push bullet chatting data. That works for low latency, but needs a protocol to be defined.
    > 
    > Rob: In what way does WebVTT not support live video?
    > 
    > Huaqi: Referring to the use case document, the live streaming interaction use case (4.2) isn't supported. Although technically it can be made to work, the WebVTT document doesn't define any information for live updates.
    > 
    > <Yajun_Chen> https://w3c.github.io/danmaku/usecase.html#live-streaming-interaction
    > 
    > <nigel> RobSmith, there is no inbuilt delivery mechanism for live updates of WebVTT at all
    > 
    > <nigel> there's no semantic model for it
    > 
    > <Larry_Zhao> +q
    > 
    > Rob: I think this is supported
    > 
    > Larry: The WebVTT effects rely on cue timing, from one time to another time. So we need to know the exact time; this is why we say it doesn't support live streaming.
    > .... The VTTCue has 3 parameters: start time, end time, and the content to add.
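    > 
    > [Scribe note: for reference, a minimal JavaScript sketch of the VTTCue interface Larry describes; the timings must be known up front, which is the difficulty for live content:
    > 
    >     // Assumes the page has a <video> element with at least one text track.
    >     const video = document.querySelector("video");
    >     // VTTCue(startTime, endTime, text): all three arguments are required.
    >     const cue = new VTTCue(10.0, 15.0, "Hello bullet chat");
    >     video.textTracks[0].addCue(cue);
    > ]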
    > 
    > Nigel: This is confusing the WebVTT document format and the VTTCue interface. We should separate the API and the document, as these have different capabilities.
    > 
    > Rob: We're looking at live streaming with DataCue in WICG.
    > 
    > Nigel: The WebVTT document doesn't specify any support for live updates
    > 
    > Gary: There's nothing specifically about live video, but that doesn't mean you can't do it. You can chunk up the WebVTT.
    > 
    > Kaz: We don't have to mention WebVTT within the requirements description here; let's focus on the requirements, and do the detailed gap analysis with WebVTT, etc., later.
    > 
    > Huaqi: We have prioritised the requirements gaps.
    > .... The first is to support writing real-time data, and a web API for reading this data, for VoD and live video.
    > .... We found that neither WebVTT nor TTML supports non-video use.
    > 
    > Nigel: You could do it differently, if you have a different source of time.
    > 
    > <xfq> Is it possible to use TTML without video in HTML?
    > 
    > <nigel> That depends on your TTML player xfq.
    > 
    > <nigel> There's no reason why not, as long as you have a time source.
    > 
    > <xfq> I see. Thanks nigel.
    > 
    > Igarashi: Is the requirement to render bullet chatting without video? Render with audio, for example?
    > 
    > Huaqi: We want to support non-video: e.g., use case 4.5 Interaction with a web page
    > 
    > Igarashi: So the rendering doesn't use a video timeline?
    > 
    > Huaqi: That's right
    > 
    > Igarashi: So all the rendering uses its own bullet chat timeline?
    > 
    > Huaqi: There's no timeline. The comments are rendered when they are sent, and display speed is set by the user.
    > 
    > <xfq> here's a demo: https://w3c.github.io/danmaku/demos/no-media/
    > 
    > Huaqi: Another scenario is 4.6 Interactive wall, which is similar
    > 
    > Gary: Is the timing model wall clock time?
    > 
    > Huaqi: The idea is we don't use cue times in this scenario in the proposal
    > 
    > <tidoust> [I note "en passant" that the Timing Object spec explores the idea of exposing a timing object independent of (or possibly connected to) a media element, precisely to allow scenarios such as interactive walls: http://webtiming.github.io/timingobject/]
    > 
    > Larry: For live video, we can use m3u8, with a WebVTT index file. The live example only has cue settings, no times.
    > 
    > Igarashi: What is the accuracy of the time synchronisation between the video and the bullet chatting?
    > 
    > Larry: In our real web application, the bullet chatting doesn't synchronise with the timeline. For live video, we can't guarantee the synchronisation
    > 
    > <Song> scribenick: Song
    > 
    > Fuqiao: Displaying bullet chats has higher priority than time synchronization in live streaming, even if time synchronization is technically possible. That's to say, accuracy of synchronization isn't so important there. But for on-demand video, we should keep the bullet chatting exactly in sync with the video timeline.
    > 
    > <scribe> scribenick: cpn
    > 
    > Igarashi: So accuracy of synchronisation isn't so important
    > 
    > <kaz> https://w3c.github.io/danmaku/usecase.html#interactive-wall Bullet Chatting Use Cases - 4.6 Interactive wall
    > 
    > Kaz: We can follow up to clarify those requirements for synchronisation for these use cases
    > 
    > Huaqi: We can go through the gap analysis at the next meeting
    > 
    > <scribe> scribenick: Song
    > 
    > Huaqi: After the discussion of the data interchange format, we plan to discuss its rendering.
    > .... We need to define the rendering rules,
    > .... and how to support non-video scenarios.
    > .... In the bullet chatting API proposal, we define a new component; how should it be rendered?
    > .... Bullet chatting needs to support images; how do we extend WebVTT to display an image in bullet chatting? Can we re-use an existing tag?
    > .... Besides, we need new APIs to add real-time bullet chatting and to set the duration of bullet chatting.
    > 
    > <kaz> scribenick: kaz
    > 
    > Igarashi: Based on today's discussion, there are generic requirements as well.
    > .... So discussion of those generic issues should be split out,
    > .... and we should focus on requirements for bullet chatting itself.
    > 
    > Kaz: +1
    > .... We should clarify the description of the use cases and requirements a bit more.
    > 
    > Pierre: I would encourage everybody to send your questions/comments on the reflector.
    > .... Also we could schedule a follow-up discussion.
    > 
    > Song: I agree
    > 
    > <RobSmith> Can you post a link to the reflector please?
    > 
    > [ the MEIG list is public-web-and-tv@w3.org ]
    > 
    > <tidoust> https://github.com/w3c/danmaku/issues (related GitHub issue, fyi)
    > 
    > Kaz: Let's continue the discussion about how to deal with the GitHub issues on the MEIG mailing list as well, as Pierre suggested.
    > 
    > Pierre: The MEIG Chairs will organize the discussion.
    > .... We need to ask questions on the reflector.
    > 
    > Chris: I agree. Do we have a plan to meet again as the TF?
    > 
    > Huaqi: Yes, and we can continue the TF work.
    > 
    > <igarashi> ok
    > 
    > Pierre: Let's continue the discussion on the reflector.
    > 
    > Chris: Thanks for presenting this and making progress, Huaqi.
    > .... The next MEIG call will be Feb. 4,
    > .... a joint call with the WoT WG.
    > .... We're planning a slightly longer call so that we can also discuss MEIG topics.
    > .... We'll make an announcement with the details.
    > .... Anything else?
    > 
    > Huaqi: Note that there will be Chinese New Year holidays at the end of January, so the TF call would be in early February.
    > 
    > Chris: Good idea, thank you!
    > 
    > [adjourned]
    > Summary of Action Items
    > Summary of Resolutions
    > [End of minutes]
    > Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
    > $Date: 2020/01/10 04:27:09 $
    

Received on Tuesday, 14 January 2020 11:08:58 UTC