- From: Olivier Thereaux <notifications@github.com>
- Date: Wed, 11 Sep 2013 07:29:43 -0700
- To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
- Message-ID: <WebAudio/web-audio-api/issues/122/24244320@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17359#1) by Marcus Geelnard (Opera) on W3C Bugzilla. Tue, 12 Jun 2012 09:13:59 GMT

The change mostly covers the questions asked. Feedback on the new changes:

- The algorithm defined in the C++ (?) function `calculateNormalizationScale` would be much better defined in pseudo code, and could probably be more compact. The code also seems to depend on internal data structures specific to a particular implementation. (See the sketch after this list.)
- The text "A mono, stereo, or 4-channel <code>AudioBuffer</code> containing the (possibly multi-channel) impulse response" is confusing. What does "possibly multi-channel" mean in this context? Can a mono AudioBuffer be multi-channel?
- Editorial: "Normative requirements for multi-channel convolution matrixing are described <a href="#Convolution-reverb-effect">here</a>". Please don't use "here"-links.
- It is unspecified what should happen if you first set the buffer attribute to an AudioBuffer "buf" and later make changes to your locally referenced "buf" (or, for that matter, make modifications directly to the array returned by buffer.getChannelData(k)). (See the example after this list.)
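For reference, here is a compact sketch of what the draft's `calculateNormalizationScale` appears to compute (RMS normalization of the impulse response plus an empirical gain calibration), restated in TypeScript-flavoured pseudo code. The constant names and the 4-channel halving follow the WebKit-era draft; this is illustrative only, not normative text:

```ts
// Sketch of the draft's normalization algorithm, free of
// implementation-specific data structures.
function calculateNormalizationScale(buffer: AudioBuffer): number {
  const GainCalibration = 0.00125;
  const GainCalibrationSampleRate = 44100;
  const MinPower = 0.000125;

  // RMS power across all channels of the impulse response.
  let power = 0;
  for (let c = 0; c < buffer.numberOfChannels; c++) {
    const data = buffer.getChannelData(c);
    for (let i = 0; i < data.length; i++) {
      power += data[i] * data[i];
    }
  }
  power = Math.sqrt(power / (buffer.numberOfChannels * buffer.length));

  // Guard against degenerate responses (silence, NaN, infinity).
  if (!isFinite(power) || power < MinPower) {
    power = MinPower;
  }

  let scale = 1 / power;

  // Empirical calibration so processed and unprocessed signals have
  // roughly equal perceived loudness.
  scale *= GainCalibration;

  // Compensate for the impulse response's sample rate.
  if (buffer.sampleRate) {
    scale *= GainCalibrationSampleRate / buffer.sampleRate;
  }

  // "True stereo" (4-channel) responses sum two convolutions per
  // output channel, so halve the scale.
  if (buffer.numberOfChannels === 4) {
    scale *= 0.5;
  }

  return scale;
}
```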
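And a minimal example of the mutation scenario raised in the last point; whether the convolver observes the later writes is exactly what the spec leaves unspecified:

```ts
const ctx = new AudioContext();
const convolver = ctx.createConvolver();

// Build a one-second mono impulse response: a unit impulse.
const buf = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
buf.getChannelData(0)[0] = 1;
convolver.buffer = buf;

// Later: mutate the locally referenced buffer directly.
buf.getChannelData(0).fill(0);
buf.getChannelData(0)[100] = 0.5;
// Unspecified: does the convolver now use the new response,
// the old one, or something in between?
```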
---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/122#issuecomment-24244320

Received on Wednesday, 11 September 2013 14:30:18 UTC