Re: [MSE] New Proposal for Bug 20901

On 03/05/2013 17:21, Aaron Colwell wrote:
> On Fri, May 3, 2013 at 6:12 AM, Robert O'Callahan 
> <robert@ocallahan.org <mailto:robert@ocallahan.org>> wrote:
>
>     On Thu, May 2, 2013 at 10:39 AM, Aaron Colwell
>     <acolwell@google.com <mailto:acolwell@google.com>> wrote:
>
>         *Use Case B: Appending media into a continuous sequence w/o
>         knowledge of internal timestamps.*
>          Some applications want to create a presentation by
>         concatenating media segments from different sources without
>         knowledge of the timestamps inside the segments. Each media
>         segment appended should be placed, in the presentation
>         timeline, immediately after the previously appended segment
>         independent of what the internal timestamps are. At the
>         beginning of each media segment, a new timestampOffset value is
>         calculated so that the timestamps in the media segment will
>         get mapped to timestamps that immediately follow the end of
>         the previous media segment.
>
>         *Use Case C: Place media at a specific location in the
>         timeline w/o knowledge of internal timestamps.*
>          This is related to Use Case B. This case is useful for
>         placing media segments from a third party in the middle of a
>         presentation. It also allows an application that receives
>         media segments from a live source to easily map the first
>         segment received to presentation time 0.
>
>
>     Maybe for these two use-cases it would be more convenient to
>     create a first-class API for seamlessly chaining together
>     resources (or subsegments of resources) loaded from several
>     independent media elements? Over a year ago I prototyped a
>     MediaStreams-based API for doing this, but there are other ways to
>     do it too --- anything that lets an author say "play this element
>     for T1 seconds, seamlessly followed by this other element for T2
>     seconds" etc. If it lets the application avoid manipulating
>     buffers of compressed data, this style of API would be much more
>     convenient than using MSE for those use-cases.
>
>
> I agree that if we only cared about these two cases a simpler API 
> could be used. I only split these out as separate cases because doing 
> so highlights some functionality we get for free.
I agree with Roc that seamless playback of chained resources through a 
single HTMLMediaElement would be very useful. That's what I meant when 
I said that "MSE could be viewed as an API to create a playlist" [1]. 
MSE does provide that, but only to a certain extent: for instance, 
people are asking to chain non-fragmented MP4 files, and that is not 
currently possible with MSE.
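
To illustrate Use Case B with the current editor's draft API: when 
each segment's internal timestamps happen to start at zero, the 
application can do the offset bookkeeping itself, roughly as below. 
The MIME type, variable names and duration bookkeeping are assumptions 
of this sketch; the point of the proposal is precisely that the UA 
would compute the offset when the internal timestamps are not known.

  var video = document.querySelector('video');
  var mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', function () {
    // The MIME type is an assumption of the sketch.
    var sourceBuffer = mediaSource.addSourceBuffer(
        'video/webm; codecs="vp8, vorbis"');
    var nextOffset = 0;

    // Appends a segment so that it starts exactly where the
    // previous one ended. This only works because we assume the
    // segment's internal timestamps start at zero.
    function appendNext(segmentData, segmentDuration) {
      sourceBuffer.timestampOffset = nextOffset;
      sourceBuffer.appendBuffer(segmentData);
      nextOffset += segmentDuration;
    }
    // Real code would wait for the previous append to complete
    // (the 'updateend' event) before calling appendNext() again.
  });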

> These cases are a subset of what is needed to support MPEG2-TS and 
> HLS. I felt that it was useful to call them out in a slightly 
> different context so that people would realize that they could be used 
> for ad-insertion and live broadcasts as well.
>
> Is MSE the simplest API to address these two cases? No. My point is 
> that MSE is able to address them with functionality that we are 
> already building for adaptive streaming.
For adaptive streaming, if the content is properly authored (segments 
time-aligned and each starting with a RAP, i.e. a random access 
point), simple chaining would also work. What MSE brings is a way to 
handle overlapping segments, which is useful for online editing, or 
for adaptive streaming when the segments are not aligned.

Cyril

[1] http://lists.w3.org/Archives/Public/public-html-media/2013Feb/0074.html

-- 
Cyril Concolato
Maître de Conférences/Associate Professor
Groupe Multimedia/Multimedia Group
Telecom ParisTech
46 rue Barrault
75 013 Paris, France
http://concolato.wp.mines-telecom.fr/

Received on Tuesday, 14 May 2013 14:40:40 UTC