Re: [MSE] New Proposal for Bug 20901

On Fri, May 3, 2013 at 6:12 AM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, May 2, 2013 at 10:39 AM, Aaron Colwell <acolwell@google.com> wrote:
>
>> *Use Case B: Appending media into a continuous sequence w/o knowledge of
>> internal timestamps.*
>> Some applications want to create a presentation by concatenating media
>> segments from different sources without knowing the timestamps inside the
>> segments. Each appended media segment should be placed in the presentation
>> timeline immediately after the previously appended segment, regardless of
>> its internal timestamps. At the beginning of each media segment, a new
>> timestampOffset value is calculated so that the segment's timestamps are
>> mapped to presentation timestamps that immediately follow the end of the
>> previous media segment.
>>
>> *Use Case C: Place media at a specific location in the timeline w/o
>> knowledge of internal timestamps.*
>> This is related to Use Case B. It is useful for placing media segments
>> from a third party in the middle of a presentation, and it also allows an
>> application that receives media segments from a live source to easily map
>> the first segment received to presentation time 0.
>>
>
> Maybe for these two use-cases it would be more convenient to create a
> first-class API for seamlessly chaining together resources (or subsegments
> of resources) loaded from several independent media elements? Over a year
> ago I prototyped a MediaStreams-based API for doing this, but there are
> other ways to do it too: anything that lets an author say "play this
> element for T1 seconds, seamlessly followed by this other element for T2
> seconds" etc. If it lets the application avoid manipulating buffers of
> compressed data, this style of API would be much more convenient than using
> MSE for those use-cases.
>

I agree that if we only cared about these two cases, a simpler API could be
used. I split them out as separate cases because doing so highlights some
functionality we get for free. These cases are a subset of what is needed
to support MPEG2-TS and HLS. I felt it was useful to call them out in a
slightly different context so that people would realize they could also be
used for ad insertion and live broadcasts.
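
For concreteness, the kind of first-class chaining API being suggested might
look something like the sketch below. Everything here is hypothetical:
ChainEntry and chainMediaElements are invented names for illustration, not a
shipped or proposed interface.

// Purely hypothetical sketch of a first-class chaining API like the one
// suggested above; ChainEntry and chainMediaElements are invented names.
interface ChainEntry {
  element: HTMLMediaElement; // an independently loaded <video> or <audio>
  seconds: number;           // "play this element for T seconds..."
}

// "...seamlessly followed by this other element": the UA would splice the
// elements' decoded output into one continuous MediaStream.
declare function chainMediaElements(entries: ChainEntry[]): MediaStream;

// output.srcObject = chainMediaElements([
//   { element: ad, seconds: 30 },
//   { element: mainProgram, seconds: 120 },
// ]);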

Is MSE the simplest API to address these two cases? No. My point is that MSE
can address them with functionality we are already building for adaptive
streaming.
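
To make that concrete, here is a minimal sketch of Use Cases B and C using
the SourceBuffer.mode = 'sequence' and timestampOffset attributes as they
appear in current MSE drafts; the exact attribute names in the bug 20901
proposal may differ, and waitForUpdateEnd / appendInSequence are helpers
invented for this example.

// Minimal sketch, assuming 'sequence' append mode and timestampOffset;
// waitForUpdateEnd and appendInSequence are invented helper names.
function waitForUpdateEnd(sb: SourceBuffer): Promise<void> {
  return new Promise((resolve) =>
    sb.addEventListener('updateend', () => resolve(), { once: true }));
}

async function appendInSequence(video: HTMLVideoElement,
                                segments: ArrayBuffer[]): Promise<void> {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise((resolve) =>
    mediaSource.addEventListener('sourceopen', resolve, { once: true }));

  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');

  // Use Case B: in 'sequence' mode the UA computes a fresh timestampOffset
  // at the start of each media segment, in effect
  //   timestampOffset = end of previous segment - segment's first timestamp,
  // so segments land back to back whatever their internal timestamps are.
  sb.mode = 'sequence';

  // Use Case C: seed the offset so the first media segment (e.g. the first
  // one received from a live source) is mapped to presentation time 0.
  sb.timestampOffset = 0;

  // segments[0] is assumed to be the initialization segment.
  for (const segment of segments) {
    sb.appendBuffer(segment);
    await waitForUpdateEnd(sb);
  }
  mediaSource.endOfStream();
}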

Aaron

Received on Friday, 3 May 2013 15:21:47 UTC