Re: MediaStreamAudioSourceNode when there's more than one audio track

On Wed, Jun 12, 2013 at 8:58 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, Jun 13, 2013 at 3:48 PM, Chris Rogers <crogers@google.com> wrote:
>
>> On Wed, Jun 12, 2013 at 8:26 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>>
>>> The spec currently says "This interface represents an audio source from
>>> a MediaStream. The first AudioMediaStreamTrack from the MediaStream will be
>>> used as a source of audio." Wouldn't it make more sense to use all the
>>> enabled audio tracks, mixed together? That's what people will hear if they
>>> feed the MediaStream into an <audio> element.
>>>
>>
>> Good question.  The idea was to be able to do the mixing in the
>> AudioContext on a per-track basis.  We really need to be able to create
>> this node given a specific AudioMediaStreamTrack as well as a MediaStream
>> to have this fine a level of control.
>>
>
> You don't, because you can create a MediaStream that contains a single
> AudioStreamTrack taken from some other MediaStream, and make that the input
> to your MediaStreamAudioSourceNode.
>
> However, it would be simpler, and easy to implement, to have an overload of
> createMediaStreamSource that takes an AudioStreamTrack instead of a
> MediaStream, and have the MediaStream version mix the tracks. Sound good?
>
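
For illustration, the wrapper-MediaStream workaround described above might
look roughly like this. This is only a minimal sketch: the `stream` variable
is assumed to be a MediaStream obtained elsewhere (e.g. from getUserMedia),
and the variable names are made up.

  // Wrap a single audio track in its own MediaStream so that only that
  // track feeds the MediaStreamAudioSourceNode. "stream" is an assumed
  // pre-existing MediaStream.
  var context = new AudioContext();

  var track = stream.getAudioTracks()[0];   // pick the track you care about
  var wrapper = new MediaStream([track]);   // a stream containing only that track

  var source = context.createMediaStreamSource(wrapper);
  source.connect(context.destination);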

Yes, I agree about the overload.  Also, mixing all enabled tracks seems like
the right thing to do.
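
For comparison, here is a rough sketch of how the two forms might look from
script. This is hypothetical: the track-taking form of createMediaStreamSource
is only the proposal above, not current spec, and `stream` is again an assumed
MediaStream with several enabled audio tracks. In practice you would use one
form or the other, not both at once.

  var context = new AudioContext();

  // Existing signature: under the proposal this would mix all enabled
  // audio tracks, matching what an <audio> element plays for the stream.
  var mixed = context.createMediaStreamSource(stream);
  mixed.connect(context.destination);

  // Proposed overload: one source node per track, so the mix can be
  // controlled inside the AudioContext instead.
  var tracks = stream.getAudioTracks();
  for (var i = 0; i < tracks.length; i++) {
    var source = context.createMediaStreamSource(tracks[i]); // proposed form
    var gain = context.createGain();
    gain.gain.value = 1 / tracks.length;  // arbitrary per-track level
    source.connect(gain);
    gain.connect(context.destination);
  }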


>
> Rob
> --
> "If you love those who love you, what credit is that to you? Even
> sinners love those who love them. And if you do good to those who are
> good to you, what credit is that to you? Even sinners do that."
>

Received on Thursday, 13 June 2013 19:50:35 UTC