- From: Olivier Thereaux <notifications@github.com>
- Date: Wed, 11 Sep 2013 07:29:40 -0700
- To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
- Message-ID: <WebAudio/web-audio-api/issues/112/24244292@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17396#6) by Chris Rogers on W3C Bugzilla. Wed, 16 Jan 2013 01:11:57 GMT

(In reply to [comment #6](#issuecomment-24244285))

> I'm also really unclear on this. The current draft includes some downmix matrices, but doesn't say when they are to be used.
>
> Mapping to a smaller number of output channels inside an AudioDestinationNode is the only obvious place I see, but this is difficult to apply consistently, since surround playback is often supported on systems without multichannel playback hardware, with the OS doing its own downmixing. In other words, when "downmixing should be supported", which software layer should be doing the supporting?
>
> If setting numberOfChannels asks AudioDestinationNode to up/down mix to a particular number of output channels (ignoring what the lower layers might do with this), how do we set it to pass-through?
>
> Should maxNumberOfChannels change in response to configuration changes?

(For what it's worth, this hasn't yet been implemented in WebKit.)

Robert and I have been discussing this, but we don't have all the answers yet. Robert suggests that the up/down mixing would be a property of each node, which could make sense. We've also discussed the idea of a way to query the AudioContext for the hardware channel layout, in addition to just the raw number of channels (.maxNumberOfChannels). Then we would allow the JS code to configure the up/down mixing behavior as it wishes, perhaps using the hardware channel layout information. Robert can jump in if I've misunderstood.

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/112#issuecomment-24244292
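A minimal sketch of the "mixing as a property of each node" idea discussed above, assuming the per-node channel controls that later appeared in the Web Audio API (channelCount, channelCountMode, and channelInterpretation on AudioNode, plus maxChannelCount on AudioDestinationNode); these names were not settled at the time of this message, so treat this as illustrative rather than what was then specified:

```js
// Sketch only: per-node up/down-mixing configuration, using the channel
// controls that later shipped in the Web Audio API. Everything here is
// standard API surface; the specific values are illustrative.
const ctx = new AudioContext();

// Query how many channels the output hardware can accept, then open the
// destination up to that width instead of the default stereo.
const maxChannels = ctx.destination.maxChannelCount; // e.g. 6 on a 5.1 setup
ctx.destination.channelCount = maxChannels;
ctx.destination.channelCountMode = 'explicit';       // mix to exactly channelCount
ctx.destination.channelInterpretation = 'speakers';  // apply speaker up/down-mix rules

// Per-node control: make a gain node pass channels through untouched.
// 'discrete' disables the speaker-layout mixing matrices entirely.
const passThrough = ctx.createGain();
passThrough.channelCountMode = 'max';                // follow the input's channel count
passThrough.channelInterpretation = 'discrete';      // no up/down mixing
passThrough.connect(ctx.destination);
```

In this shape of the API, the 'discrete' interpretation is what answers the pass-through question quoted above: it tells a node to map channels one-to-one rather than consult any downmix matrix.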
Received on Wednesday, 11 September 2013 14:30:28 UTC