- From: Robert O'Callahan <robert@ocallahan.org>
- Date: Thu, 13 Jun 2013 15:58:42 +1200
- To: Chris Rogers <crogers@google.com>
- Cc: "public-audio@w3.org" <public-audio@w3.org>
- Message-ID: <CAOp6jLa1N3mNTi_VG40ZzV=-wn4K5PSvgbQRxZB3itDWE2z+1Q@mail.gmail.com>
On Thu, Jun 13, 2013 at 3:48 PM, Chris Rogers <crogers@google.com> wrote:

> On Wed, Jun 12, 2013 at 8:26 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> The spec currently says "This interface represents an audio source from a
>> MediaStream. The first AudioMediaStreamTrack from the MediaStream will be
>> used as a source of audio." Wouldn't it make more sense to use all the
>> enabled audio tracks, mixed together? That's what people will hear if they
>> feed the MediaStream into an <audio> element.
>
> Good question. The idea was to be able to do the mixing in the
> AudioContext on a per-track basis. We really need to be able to create
> this node given a specific AudioMediaStreamTrack as well as a MediaStream
> to have this fine a level of control.

You don't, because you can create a MediaStream that contains a single
AudioStreamTrack taken from some other MediaStream, and make that the input
to your MediaStreamAudioSourceNode. However, it would be simpler and easier
to implement to have an overload of createMediaStreamSource that takes an
AudioStreamTrack instead of a MediaStream, and have the MediaStream version
mix the tracks. Sound good?

Rob
--
"If you love those who love you, what credit is that to you? Even sinners
love those who love them. And if you do good to those who are good to you,
what credit is that to you? Even sinners do that."
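[For readers following along, a minimal TypeScript sketch of the workaround Rob describes: pulling one audio track out of a multi-track MediaStream, wrapping it in a fresh single-track MediaStream, and feeding that to createMediaStreamSource. The function name and setup are illustrative assumptions; it presumes a browser supporting the MediaStream(tracks) constructor and an existing AudioContext.]

```typescript
// Hypothetical helper illustrating per-track control without a new overload.
const audioCtx = new AudioContext();

function sourceFromFirstAudioTrack(stream: MediaStream): MediaStreamAudioSourceNode {
  // Pick the specific track we want fine-grained control over.
  const [track] = stream.getAudioTracks();
  // Wrap it in a MediaStream containing only that track...
  const singleTrackStream = new MediaStream([track]);
  // ...and use the wrapper as the input to a MediaStreamAudioSourceNode.
  return audioCtx.createMediaStreamSource(singleTrackStream);
}
```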
Received on Thursday, 13 June 2013 03:59:10 UTC