RE: Clarification of addSourceBuffer() method

> I didn't read it that way. Might be worth clarifying.

Can you please file an MSE bug giving a proposal for text that you would find clearer?

/paulc

Paul Cotton, Microsoft Canada
17 Eleanor Drive, Ottawa, Ontario K2E 6A3
Tel: (425) 705-9596 Fax: (425) 936-7329


-----Original Message-----
From: Silvia Pfeiffer [mailto:silviapfeiffer1@gmail.com] 
Sent: Monday, October 20, 2014 7:42 AM
To: Cyril Concolato
Cc: <public-html-media@w3.org>
Subject: Re: Clarification of addSourceBuffer() method

On Mon, Oct 20, 2014 at 9:10 PM, Cyril Concolato <cyril.concolato@telecom-paristech.fr> wrote:
> Le 20/10/2014 11:36, Silvia Pfeiffer a écrit :
>
>> On Mon, Oct 20, 2014 at 6:48 PM, Cyril Concolato 
>> <cyril.concolato@telecom-paristech.fr> wrote:
>>>
>>> Le 20/10/2014 03:27, Silvia Pfeiffer a écrit :
>>>>
>>>> I'm just taking a look at the MSE spec (writing a book chapter 
>>>> about it, actually).
>>>>
>>>> I'm looking at MediaSource.addSourceBuffer() which to me seems to 
>>>> be the key method to add chunks of a media resource to a 
>>>> MediaSource object.
>>>>
>>>> I'm reading the following in the spec:
>>>>
>>>> ``
>>>> addSourceBuffer
>>>>
>>>> Adds a new SourceBuffer to sourceBuffers.
>>>>
>>>> Implementations must support at least 1 MediaSource object with the 
>>>> following SourceBuffer configurations. MediaSource objects must 
>>>> support each of the configurations below, but they are only 
>>>> required to support one configuration at a time. Supporting 
>>>> multiple configurations at once or additional configurations is a 
>>>> quality of implementation issue.
>>>>
>>>> * A single SourceBuffer with 1 audio track and/or 1 video track.
>>>>
>>>> * Two SourceBuffers with one handling a single audio track and the 
>>>> other handling a single video track.
>>>> ``
>>>>
>>>> It seems that a SourceBuffer can only have either an interleaved 
>>>> audio/video track, or just audio or just video. I'm a bit confused 
>>>> about that, because SourceBuffer clearly talks about multiple audio 
>>>> and video tracks, and also about text tracks.
>>>
>>> In theory, a SourceBuffer may indeed correspond to many multiplexed 
>>> streams (audio(s)+video(s)+text track(s)+metadata track(s)). The 
>>> text you quoted indicates minimal implementation requirements. 
>>> Implementations are free to support more than that.
>>
>> Ah thanks for clarifying.
>>
>> It's still quite confusing, actually. In particular, the point about 
>> returning two SourceBuffers is unclear when the return value of the 
>> method is merely a single SourceBuffer object. Can you explain how 
>> that is going to work, too?
>
> addSourceBuffer always returns a single SourceBuffer. The text above 
> is meant to say that implementations should support either one call to 
> addSourceBuffer for a multiplexed a/v stream, or two calls for separate 
> audio and video streams, and that addSourceBuffer may throw a 
> QUOTA_EXCEEDED_ERROR on subsequent calls.

Ah, thanks, that makes sense. I didn't read it that way. Might be worth clarifying.

Thanks,
Silvia.
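
For concreteness, the two minimal configurations discussed in this thread map to script roughly as follows. This is only a sketch: the WebM MIME type strings, the <video> element lookup, and the error handling are assumptions for illustration, not requirements taken from the spec.

``
// A sketch of the two minimal SourceBuffer configurations, assuming WebM
// content; the MIME type strings are illustrative only.
const mediaSource = new MediaSource();
const video = document.querySelector('video') as HTMLVideoElement;
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  try {
    // Configuration 1: one SourceBuffer fed a multiplexed audio+video stream.
    // const muxed = mediaSource.addSourceBuffer('video/webm; codecs="vp8,vorbis"');

    // Configuration 2: two SourceBuffers, one per elementary stream.
    const audioBuffer = mediaSource.addSourceBuffer('audio/webm; codecs="vorbis"');
    const videoBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');

    // Each call returns a single SourceBuffer; initialization and media
    // segments are then appended to each one independently via appendBuffer().
  } catch (e) {
    // Beyond the minimal configurations, an implementation may reject further
    // calls with a QuotaExceededError (or NotSupportedError for unknown types).
    console.error('addSourceBuffer failed:', e);
  }
});
``

The second configuration is what adaptive-streaming players typically use, since it lets audio and video segments be fetched and appended independently.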

Received on Monday, 20 October 2014 12:21:56 UTC