- From: Google <sysbot+gh@w3.org>
- Date: Tue, 12 Jun 2018 19:13:01 +0000
- To: public-web-and-tv@w3.org
Remote playback cases are always going to be best effort at keeping the video element in sync with the remote playback state. For video editing use cases, remote playback is not as relevant (except maybe to render the final output).

There are a number of implementation constraints that are going to make it challenging to provide a completely accurate instantaneous frame number or presentation timestamp in a modern browser during video playback:

- The JS event loop will run in a different thread than the one painting pixels on the screen. There will be buffering and jitter in the intermediate thread hops.
- The event loop often runs at a different frequency than the underlying video, so frames will span multiple loops.
- Video is often decoded, painted, and composited asynchronously in hardware or software outside of the browser. There may not be frame-accurate feedback on the exact paint time of a frame.

Some estimates could be made based on knowing the latency of the downstream pipeline. It might be more useful to surface the last presentation timestamp submitted to the renderer together with the estimated latency until the frame is painted.

It may also be more feasible to surface the final presentation timestamp/time code when a seek is completed. That seems more useful for video editing use cases.

Understanding the use cases here, and what exactly you need to know, would help guide concrete feedback from browsers.

--
GitHub Notification of comment by mfoltzgoogle
Please view or discuss this issue at https://github.com/w3c/media-and-entertainment/issues/4#issuecomment-396701652 using your GitHub account
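As a rough illustration of the estimate described above: if a browser hypothetically exposed the presentation timestamp of the last frame submitted to the renderer plus an estimated downstream pipeline latency, a page could back out which frame is likely on screen. The function name and inputs here are illustrative assumptions, not a real or proposed API.

```javascript
// Hypothetical sketch, not an actual browser API.
// lastSubmittedPts: presentation timestamp (seconds) of the last frame
//                   handed to the renderer.
// pipelineLatency:  estimated delay (seconds) between submission and paint.
// frameRate:        nominal frames per second of the video track.
function estimateOnScreenFrame(lastSubmittedPts, pipelineLatency, frameRate) {
  // The frame submitted at lastSubmittedPts will not be painted until roughly
  // pipelineLatency seconds later, so the frame currently visible corresponds
  // to an earlier media time.
  const visiblePts = Math.max(0, lastSubmittedPts - pipelineLatency);
  return Math.floor(visiblePts * frameRate);
}

// e.g. last submitted PTS of 10.0 s, ~50 ms pipeline latency, 30 fps video:
// the visible frame is roughly floor(9.95 * 30) = 298
```

This only yields an estimate: as noted above, jitter across thread hops and the asynchronous compositor mean the true paint time of any individual frame is not observable from JS.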
Received on Tuesday, 12 June 2018 19:13:07 UTC