- From: François Daoust via GitHub <sysbot+gh@w3.org>
- Date: Mon, 18 Jun 2018 16:03:24 +0000
- To: public-web-and-tv@w3.org
@Snarkdoof Like @nigelmegitt, I don't necessarily follow you on the performance penalties. Regardless, what I take away from this discussion on subtitles is that there are several possible ways to improve the situation, and they are not necessarily exclusive.

One way would be to have the user agent expose a frame number, or a rational time value. This seems simple in theory but is apparently hard to implement. The upside is that it would probably make it easy to act on frame boundaries; the downside is that these boundaries might be slightly artificial, because the user agent will interpolate the values in some cases (see the first sketch below).

Another way would be to make sure that an application can relate `currentTime` to the wall clock, possibly complemented with some indication of the downstream latency. This is precisely what was done in the Web Audio API: see the definition of the [`AudioContext` interface](https://webaudio.github.io/web-audio-api/#audiocontext), and notably the `getOutputTimestamp()` method and the `outputLatency` property (second sketch below). This approach seems easier to implement: computing the output latency may be hard, but attaching a timestamp whenever `currentTime` changes seems easy. An application would still have some work to do to detect frame boundaries (third sketch below), but at least we would not be asking the user agent to report possibly slightly incorrect values.

I note that this thread started with Non-Linear Editors. If someone can elaborate on the scenarios there and on why frame numbers are needed, that would be great! Supposing they are, does the application need to know the exact frame being rendered during media playback, or is it good enough if that number is only exact when the media is paused or seeked?
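To make the rounding concern concrete, here is a minimal sketch of how an application might derive a frame number from `currentTime` today. It assumes the frame rate is known out of band (the media element does not expose it); the function name is illustrative, not a proposed API.

```typescript
// Minimal sketch, assuming the frame rate is known out of band
// (HTMLMediaElement does not expose it). Illustrative only.
function estimateFrameNumber(video: HTMLVideoElement, frameRate: number): number {
  // currentTime is in seconds; a small epsilon guards against the value
  // landing a hair below a frame boundary due to floating-point rounding.
  return Math.floor(video.currentTime * frameRate + 1e-6);
}
```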
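For reference, this is roughly how the Web Audio API pattern works in practice: `getOutputTimestamp()` and `outputLatency` are real API surface, while the extrapolation step is my own sketch of how an application would use the pair.

```typescript
// Sketch of the Web Audio API pattern referenced above. getOutputTimestamp()
// pairs a stream position (contextTime, in seconds) with the wall-clock
// moment it was sampled (performanceTime, ms on the Performance clock), and
// outputLatency estimates the downstream delay.
const ctx = new AudioContext(); // starts suspended until a user gesture

function outputPositionNow(): number {
  const ts = ctx.getOutputTimestamp();
  // Extrapolate the snapshot to "right now" on the same wall clock.
  const elapsed = (performance.now() - (ts.performanceTime ?? 0)) / 1000;
  return (ts.contextTime ?? 0) + elapsed; // seconds of audio at the output
}

console.log(`Downstream latency: ${ctx.outputLatency} s`);
console.log(`Stream position at the output: ${outputPositionNow().toFixed(3)} s`);
```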
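And here is a sketch of the "work the app still has to do": detecting frame-boundary crossings by polling `currentTime` on every animation frame. Detection is only as precise as the polling interval and the granularity of `currentTime`, which is exactly the limitation under discussion.

```typescript
// Sketch: detecting frame-boundary crossings from the application side by
// polling currentTime on each animation frame. Assumes a known frameRate.
function watchFrameBoundaries(
  video: HTMLVideoElement,
  frameRate: number,
  onFrame: (frame: number) => void
): void {
  let lastFrame = -1;
  const tick = () => {
    // Same frame estimate as in the first sketch.
    const frame = Math.floor(video.currentTime * frameRate + 1e-6);
    if (frame !== lastFrame) {
      lastFrame = frame;
      onFrame(frame);
    }
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```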
--
GitHub Notification of comment by tidoust
Please view or discuss this issue at https://github.com/w3c/media-and-entertainment/issues/4#issuecomment-398106309 using your GitHub account

Received on Monday, 18 June 2018 16:03:31 UTC