[media-source] Delayed enabling or adding of tracks/streams

DanielBaulig has just created a new issue for https://github.com/w3c/media-source:

== Delayed enabling or adding of tracks/streams ==
We are currently exploring starting video playback without loading audio while the video is muted, and only adding an audio track / stream to the video later when it is unmuted. The goal is to use less data and to provide a quicker, smoother playback experience, especially in low-bandwidth and high-latency environments.

We explored three potential options, but all of them run into restrictions imposed by the specification or by implementations. Let me first outline the three options we explored and the problems each one has:

1) Start playing with a single video SourceBuffer and add an additional audio SourceBuffer later when needed.
This is possible in theory, but the [specification very clearly states](https://www.w3.org/TR/media-source/#h-note10) that the UA may throw an exception if the media element has already reached the HAVE_METADATA readyState and the UA's media engine does not support adding additional tracks during playback. In practice, relevant UAs do throw an exception if we attempt this (see the first sketch after this list).

2) Create both audio and video SourceBuffers, but set the audio SourceBuffer's mode to 'sequence' and repeatedly append a silent dummy audio segment to fill the audio buffer with inaudible audio data. The idea was to fetch the actual audio data, switch the audio SourceBuffer back to 'segments', and append the actual audio data once the video is unmuted. However, switching from 'sequence' to 'segments' will throw an [exception according to the spec](https://www.w3.org/TR/media-source/#dom-sourcebuffer-mode) (see the second sketch after this list).

3) Use the enabled attribute specified as part of AudioTrack to disable the only audio track in the audio SourceBuffer and re-enable it once the video is unmuted (see the third sketch after this list). To our understanding this should in theory be a spec-compliant way of achieving what we are looking to do, but in practice relevant UAs do not implement the VideoTrack and/or AudioTrack APIs. From past conversations with some UA vendors it sounded like there are no concrete plans across vendors to actually implement the track APIs.
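
For reference, here is a minimal TypeScript-flavoured sketch of option 1. The codec strings and the `onUnmute` hook are placeholder assumptions, not part of any actual player code:

```ts
const video = document.querySelector('video') as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  // Start with video only; init and media segments are appended elsewhere, then play().
  const videoBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.640028"');
});

// Called when the user unmutes and we want to add audio for the first time.
function onUnmute(): void {
  try {
    // Per the spec note linked above, the UA may throw here once the media
    // element has reached HAVE_METADATA; in practice, relevant UAs do.
    const audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
    // ... fetch and append audio init + media segments ...
  } catch (e) {
    console.error('Adding an audio SourceBuffer during playback failed:', e);
  }
}
```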
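
And a sketch of option 2, assuming `audioBuffer` was created alongside the video SourceBuffer during 'sourceopen' and `silentSegment` holds a pre-fetched, inaudible audio segment:

```ts
// Put the audio SourceBuffer into 'sequence' mode and keep it fed with silence.
function startSilentAudio(audioBuffer: SourceBuffer, silentSegment: ArrayBuffer): void {
  audioBuffer.mode = 'sequence';
  audioBuffer.appendBuffer(silentSegment);
  // Top up with another silent segment whenever the previous append finishes
  // (real code would pace appends and manage buffered ranges / quota).
  audioBuffer.addEventListener('updateend', () => {
    if (!audioBuffer.updating) {
      audioBuffer.appendBuffer(silentSegment);
    }
  });
}

// Attempt to switch to real audio once the video is unmuted.
function switchToRealAudio(audioBuffer: SourceBuffer): void {
  // This is where the mode-setter restriction described above bites:
  // the assignment back to 'segments' throws, so the approach breaks down here.
  audioBuffer.mode = 'segments';
  // ... would then fetch and append real, timestamped audio segments ...
}
```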
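
Finally, a sketch of option 3. The cast to `any` is only there because `audioTracks` is not reliably exposed by UAs:

```ts
// Enable or disable every audio track exposed by the media element.
function setAudioEnabled(video: HTMLVideoElement, enabled: boolean): void {
  const tracks = (video as any).audioTracks; // AudioTrackList, if implemented
  if (!tracks) {
    console.warn('This UA does not expose HTMLMediaElement.audioTracks');
    return;
  }
  for (let i = 0; i < tracks.length; i++) {
    tracks[i].enabled = enabled;
  }
}

// e.g. on unmute: video.muted = false; setAudioEnabled(video, true);
```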

- Why does the specification not allow switching back to 'segments' mode once the SourceBuffer has been in 'sequence' mode?
- Why does the specification allow UAs to throw an exception when adding a SourceBuffer during playback? Is this something that is practically hard or maybe even impossible to implement?
- Do browser vendors indeed not intend to implement the AudioTracks API?
- Are there any other ways to achieve what we would like to do? 
- If not, do people on this list see value in being able to achieve what we are trying to do? And if so, are there any suggestions on how the spec could be changed in the future to allow for something like this?

Thanks!

Please view or discuss this issue at https://github.com/w3c/media-source/issues/210 using your GitHub account

Received on Tuesday, 10 April 2018 22:26:14 UTC