- From: Ingar Mæhlum Arntzen <ingar.arntzen@gmail.com>
- Date: Mon, 17 Feb 2020 12:17:14 +0100
- To: François Daoust <fd@w3.org>
- Cc: 赵磊 <zhaolei@migu.cn>, "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
- Message-ID: <CAOFBLLp9eUrOVWuVLKXauGiiqtpnp+KLJh+oaCRpVnusKrZ8xw@mail.gmail.com>
Excellent point, François! There are plenty of use cases where it is necessary to sequence cues without having a video element to do the sequencing:

A - It is obviously important if a media experience is not made from video/audio content at all.

B - It is also important when non-video content is to be played back in synchrony with a video, yet the video lives somewhere else (e.g. in a different UI element, iframe, window, browser or device, or even on a non-Web platform).

C - Equally important, in cases where sequenced cues represent individual videos to be played, it may be very impractical to create a single hidden "master" video to do the sequencing. Video-independent sequencing would be the basis for doing advanced video mashups easily and flexibly, without having to modify a single video file.

From this it should be clear that track-like sequencing functionality -- **independent** of video/audio elements -- is an important part of the Web's support for media (yet lacking!). While we do have JS polyfills for this gap (e.g. [1]), it remains a question whether the user agent should support this too.

There might be a pedagogical point here as well. People go to Web standards for advice on how to do these things, and the only thing they will find is video-backed sequencing. They then immediately run into trouble if their case aligns with A, B, or C. I guess the "interactive wall" case could be one example among many. It also seems that these troubles tend to inspire new calls for standardization covering yet another "specific" use case, running the risk of not convincing anyone.

I guess I don't see these things as different corner cases; I see them as different indications of the same underlying problem: that sequencing logic (and media control too, by the way) is not available on the Web except bundled with video/audio. Please address the underlying problem, not the symptoms :)

Best regards,

Ingar Arntzen

[1] https://webtiming.github.io/timingsrc/
On Mon, 17 Feb 2020 at 11:03, François Daoust <fd@w3.org> wrote:

> On 11/02/2020 at 13:48, 赵磊 wrote:
> > Hello all,
> >
> > Since there are still some questions left over from the last meeting
> > about the "Interactive Wall" use case, I want to make a small comment
> > on it and try to make it clear :-)
> >
> > First, I would like to briefly explain what "Interactive Wall" is.
> > Let's say you have a laptop connected to an LCD display with an HDMI
> > cable. Under this circumstance, the laptop is your primary display,
> > and the LCD display is your secondary display.
> >
> > Now you open a browser (Firefox, Google Chrome etc.), open an HTML
> > file, then drag the browser to the secondary display and make the
> > browser enter **FULLSCREEN** mode to hide the browser menu and
> > address bar.
> >
> > The HTML file mentioned above is very simple: it contains **NO
> > VIDEO**, just some comments or tweets. It can load the comments or
> > tweets at regular intervals from a remote server using Ajax or
> > WebSocket and render the comments in the "Bullet Chatting" way. This
> > is how the "Interactive Wall" scenario works.
> >
> > You may wonder how the user sends comments to the "Interactive Wall".
> > It depends. As long as the remote server exposes some sort of HTTP
> > API (REST API etc.), you can send comments in various ways. For
> > example, we can have another HTML file with an input field in it;
> > users can type text, click the enter button, and send the comments to
> > the remote server.
> >
> > Now let's return to the "interchange format" topic. In fact,
> > "Interactive Wall" is a Bullet Chatting *rendering* use case, not a
> > use case for an "interchange format". Of course, "Interactive Wall"
> > can leverage the "interchange format" to do the rendering, since
> > there may be color and other styling options in the "interchange
> > format".
> >
> > "Interactive Wall" is a special scenario: there is no video in this
> > scenario, and we found few cases in native apps (Android, iOS etc.).
> > We are actively listening to community feedback. This use case does
> > not bring new technical requirements (it can be achieved with simple
> > HTML/CSS/JavaScript code). It is a non-video use case used in
> > browsers (we're not aware of any non-browser implementations for this
> > use case), and it is not a common use case compared with the video
> > use cases, so if it causes a lot of confusion, we can remove this
> > scenario from the use cases document if necessary. We highly
> > appreciate your feedback!
>
> Whether the "Interactive Wall" use case is important, I cannot judge.
> However, I note that, if we envision that the rendering of bullet
> comments should be based on cues -- that is, if we want the user agent
> to render the cues on its own, or at a minimum to sequence the cues
> (with the actual rendering being done by the application) -- then the
> use case actually highlights one technical gap.
>
> As far as I know, the only way to make a user agent process cues is to
> go through a media element (audio or video), and this only works
> provided that there is some audio or video to play. In other words, a
> media element cannot play if it only has a text track; there needs to
> be some audio or video track.
>
> To implement an "Interactive Wall", the Web application has to do the
> sequencing (and rendering) on its own. That is possible, but it means
> that this use case cannot be treated the exact same way as other
> bullet chatting use cases.
>
> Francois.
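For reference, the video-independent cue sequencing discussed in this thread (and provided by polyfills such as [1]) can be sketched in a few lines. This is illustrative only -- `CueSequencer`, `onenter` and `onexit` are hypothetical names, not a standard API. It computes cue enter/exit transitions against an externally supplied clock position, which is exactly what an "Interactive Wall" application currently has to do on its own:

```javascript
// Minimal sketch of video-independent cue sequencing (hypothetical API,
// loosely in the spirit of the timingsrc polyfill referenced above).
// A cue counts as "active" when start <= t < end, mirroring the timing
// model of TextTrackCue -- but no <video> or <audio> element is needed.
class CueSequencer {
  constructor() {
    this.cues = [];            // { id, start, end, data }
    this.active = new Set();   // ids of currently active cues
    this.onenter = () => {};   // called when a cue becomes active
    this.onexit = () => {};    // called when a cue stops being active
  }

  addCue(cue) {
    this.cues.push(cue);
  }

  // Drive the sequencer from any clock: a requestAnimationFrame loop,
  // a shared timing object, or a clock on another device entirely.
  update(t) {
    for (const cue of this.cues) {
      const isActive = cue.start <= t && t < cue.end;
      if (isActive && !this.active.has(cue.id)) {
        this.active.add(cue.id);
        this.onenter(cue);
      } else if (!isActive && this.active.has(cue.id)) {
        this.active.delete(cue.id);
        this.onexit(cue);
      }
    }
  }
}
```

In a browser, an "Interactive Wall" page could call `update()` from a `requestAnimationFrame` loop with a position derived from `Date.now()` or a shared timing object, creating a DOM node for each comment in `onenter` and removing it in `onexit`.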
Received on Monday, 17 February 2020 11:17:40 UTC