
Re: Of interest to this WG: Media Source API

From: Timothy B. Terriberry <tterriberry@mozilla.com>
Date: Tue, 22 Nov 2011 22:02:33 -0800
Message-ID: <4ECC8C79.3000304@mozilla.com>
CC: "public-webrtc@w3.org" <public-webrtc@w3.org>, Aaron Colwell <acolwell@chromium.org>
>>> FYI - another proposal being proposed somewhere for how to get video
>>> data
>>> into a video element, chunk by chunk. Might be relevant to our (future)
>>> resumption of work on recording.
>>> http://updates.html5rocks.com/2011/11/Stream-video-using-the-MediaSource-API
>> [snip]
>> Is that meant for some of the same use cases as the MediaStream
>> Processing API draft?

There is some overlap. If all you need is the ability to seamlessly 
connect playback between the end of one file and the start of another, 
the MediaStream Processing API can do that, and its abstraction is 
cleaner. But that requires you to be served complete, independent, valid 
files.

> The specific use case is apparently streaming video; from some exchange
> of emails I suspect that it was born from a frustration with the amount
> of code that's locked into the browser when trying to do adaptive
> streaming (variable bitrate video) with DASH.

The MediaSource API is roughly equivalent to Flash's appendBytes(), and 
has a few outstanding issues.
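To make the comparison concrete, here is a rough sketch of the chunk-append pattern both APIs enable. The queue/pump structure and names (`makeAppender`, `appendChunk`) are mine, not from either spec; a real page would append into a source buffer attached to a <video> element, for which a plain recording stub stands in here.

```javascript
// Sketch of the appendBytes()-style pattern: the page fetches media in
// chunks and pushes the raw bytes into the media pipeline itself.
// The queue guards against appending while a previous append is pending.
function makeAppender(sourceBuffer) {
  const queue = [];
  let busy = false;
  function pump() {
    if (busy || queue.length === 0) return;
    busy = true;
    sourceBuffer.append(queue.shift()); // the appendBytes()-style call
    busy = false;
    pump();
  }
  return function appendChunk(bytes) {
    queue.push(bytes);
    pump();
  };
}

// Stand-in for the browser-side buffer: just records what was appended.
const received = [];
const appendChunk = makeAppender({ append: (b) => received.push(b) });
appendChunk(new Uint8Array([1, 2]));
appendChunk(new Uint8Array([3, 4]));
```

The point is that the page, not the browser, decides which bytes arrive next, which is what makes script-driven adaptive streaming possible in the first place.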

There are some rough edges in the connection between the byte ranges and 
time. In particular, it relies on container timestamps, but those may be 
relative to some unknown offset, whereas <video> tag timestamps are all 
relative to 0. In chained Ogg files, these unknown offsets will even 
change for different segments of the file. For containers like RIFF 
(e.g., WAV), timestamps may not exist at all, as the draft points out.
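To illustrate the mapping a UA would have to perform (this is my sketch, not spec text): subtract each segment's first container timestamp so playback time starts at 0, re-deriving the base offset for every segment of a chained file. Unit frame durations are assumed to keep the arithmetic simple.

```javascript
// Map raw container timestamps (one array per chain segment, each with its
// own unknown base offset) onto a single 0-based playback timeline.
function normalizeTimestamps(segments) {
  const out = [];
  let accumulated = 0; // playback time covered by earlier chain segments
  for (const seg of segments) {
    const base = seg[0]; // unknown container offset for this segment
    for (const ts of seg) out.push(accumulated + (ts - base));
    accumulated += (seg[seg.length - 1] - base) + 1; // +1: assumed unit frames
  }
  return out;
}

// A chained stream whose second segment restarts at an unrelated offset:
const mapped = normalizeTimestamps([[100, 101, 102], [5000, 5001]]);
// mapped: [0, 1, 2, 3, 4]
```

The spec question is who owns this bookkeeping: the UA, the page, or some timestampOffset-style knob in between.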

The spec also leaves out a lot of details on the exact format of the 
data it's willing to accept. For example, if you want to provide audio 
and video from separate files (and, for adaptive streaming of things 
like movies, you do, since you don't want a separate copy of the video 
on the server for every alternate language track), then these need to be 
muxed together. Having to do this, for example, on an Ogg page level (or 
a WebM Block level) is clearly going to be a major PITA. My 
understanding from discussions with Aaron at OVC back in September was 
that he's relaxed the muxing rules to some extent from those used for 
real files, but all that needs to be documented (which he said was on 
his TODO list).
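To give a feel for the client-side burden, here is a toy interleaver of the kind a page would need if it has to mux separately-fetched audio and video itself. The chunk shape (`{ts, kind}`) is invented for illustration; real muxing at the Ogg page or WebM Block level is far messier than this timestamp merge suggests.

```javascript
// Merge two timestamp-ordered chunk streams into one interleaved stream,
// preferring audio on ties. This is the easy part; rewriting container-level
// framing around the merged chunks is where the real pain lives.
function interleaveByTimestamp(audio, video) {
  const out = [];
  let a = 0, v = 0;
  while (a < audio.length || v < video.length) {
    const takeAudio =
      v >= video.length || (a < audio.length && audio[a].ts <= video[v].ts);
    out.push(takeAudio ? audio[a++] : video[v++]);
  }
  return out;
}

const muxed = interleaveByTimestamp(
  [{ ts: 0, kind: 'a' }, { ts: 40, kind: 'a' }],
  [{ ts: 0, kind: 'v' }, { ts: 33, kind: 'v' }]
);
// muxed order by kind: a, v, v, a
```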

And finally the interaction between this API and the media cache needs 
to be specced out. Some people want to be able to seek into 
already-downloaded data without being forced to re-download it, while 
sometimes you might want to re-download it because you have more 
bandwidth available now than you did when it was first downloaded. It 
gets complicated quickly.
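As a toy illustration of that tension (the policy and threshold here are entirely invented, not anything proposed in the draft): a cached range is worth keeping unless bandwidth has improved enough that re-fetching a higher-bitrate variant is worthwhile.

```javascript
// Re-download an already-cached range only if current conditions let us
// afford a variant meaningfully better than the one we cached.
// `factor` is an arbitrary hysteresis threshold to avoid churn.
function shouldRedownload(cachedBitrate, bestAffordableBitrate, factor = 1.5) {
  return bestAffordableBitrate >= cachedBitrate * factor;
}

const keepCache = shouldRedownload(500000, 600000);   // modest improvement
const refetch = shouldRedownload(500000, 2000000);    // much more bandwidth
```

Even this trivial policy raises the spec questions above: where does the cached data live relative to the API, and who gets to evict it?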

So, MediaSource is an interesting idea, but it needs some work yet 
before it's in a state that can be re-implemented compatibly in multiple 
browsers. Aaron, please feel free to correct me if I got any of the 
above wrong.
Received on Wednesday, 23 November 2011 06:03:12 UTC
