- From: Chris Rogers <crogers@google.com>
- Date: Wed, 12 Jun 2013 20:48:22 -0700
- To: "Robert O'Callahan" <robert@ocallahan.org>
- Cc: "public-audio@w3.org" <public-audio@w3.org>
On Wed, Jun 12, 2013 at 8:26 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> The spec currently says "This interface represents an audio source from a
> MediaStream. The first AudioMediaStreamTrack from the MediaStream will be
> used as a source of audio." Wouldn't it make more sense to use all the
> enabled audio tracks, mixed together? That's what people will hear if they
> feed the MediaStream into an <audio> element.

Good question. The idea was to be able to do the mixing in the AudioContext on a per-track basis. We really need to be able to create this node given a specific AudioMediaStreamTrack, as well as a MediaStream, to have that fine a level of control. Maybe, if given a MediaStream, all the tracks should be mixed together, as you suggest, instead of taking the first track.

> Rob
> --
> "If you love those who love you, what credit is that to you? Even
> sinners love those who love them. And if you do good to those who are
> good to you, what credit is that to you? Even sinners do that."
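For concreteness, a minimal sketch contrasting the two behaviors under discussion. AudioContext, getUserMedia, and createMediaStreamSource are standard; the per-track factory createMediaStreamTrackSource is the kind of API the reply asks for and is assumed here purely for illustration (it was not in the spec at the time of this thread).

```js
const audioContext = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  // Spec text under discussion: the node wraps the whole MediaStream,
  // but only the first audio track actually feeds the graph.
  const streamSource = audioContext.createMediaStreamSource(stream);
  streamSource.connect(audioContext.destination);

  // Per-track alternative (hypothetical factory, see lead-in): one
  // source node per enabled track, so each track can be gain-staged
  // and mixed independently inside the AudioContext graph.
  const tracks = stream.getAudioTracks().filter((t) => t.enabled);
  for (const track of tracks) {
    const trackSource = audioContext.createMediaStreamTrackSource(track);
    const gain = audioContext.createGain();
    gain.gain.value = 1 / tracks.length; // naive equal-weight mix-down
    trackSource.connect(gain);
    gain.connect(audioContext.destination);
  }
});
```

With the per-track form, the equal-weight mix above reproduces what an <audio> element would play, while still leaving each track individually addressable in the graph.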
Received on Thursday, 13 June 2013 03:48:49 UTC