- From: <bugzilla@jessica.w3.org>
- Date: Tue, 12 Jun 2012 09:14:00 +0000
- To: public-audio@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17359

Marcus Geelnard (Opera) <mage@opera.com> changed:

| What       | Removed  | Added          |
|------------|----------|----------------|
| Status     | RESOLVED | REOPENED       |
| CC         |          | mage@opera.com |
| Resolution | FIXED    |                |

--- Comment #2 from Marcus Geelnard (Opera) <mage@opera.com> 2012-06-12 09:13:59 UTC ---

The change mostly covers the questions asked. Feedback on the new changes:

- The algorithm defined in the C++ (?) function calculateNormalizationScale would be much better defined in pseudo code, and could probably be more compact. The code also seems to depend on internal data structures specific to a particular implementation. (A sketch of what such pseudo code might look like follows after this list.)
- The text "A mono, stereo, or 4-channel <code>AudioBuffer</code> containing the (possibly multi-channel) impulse response" is confusing. What does "possibly multi-channel" mean in this context? Can a mono AudioBuffer be multi-channel?
- Editorial: "Normative requirements for multi-channel convolution matrixing are described <a href="#Convolution-reverb-effect">here</a>". Please don't use "here"-links.
- It is unspecified what should happen if you first set the buffer attribute to an AudioBuffer "buf" and later make changes to your locally referenced "buf" (or, for that matter, make modifications directly to the array returned by buffer.getChannelData(k)).
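As a rough illustration of the kind of compact, implementation-neutral pseudo code the first point asks for, here is a minimal C++ sketch of an RMS-based normalization scale. The function signature, the flat channel-data layout, and the calibration constants are assumptions made for illustration; only the general shape (sum the squared samples, take the RMS, clamp, invert, calibrate) is meant to reflect what the referenced C++ listing does.

```cpp
#include <cmath>
#include <cstddef>

// Sketch only: an implementation-neutral restatement of an RMS-based
// normalization scale for a convolution impulse response. The constants
// (GainCalibration, MinPower) and the (channels x frames) input layout are
// illustrative assumptions, not normative values from the spec.
float calculateNormalizationScale(const float* const* channelData,
                                  size_t numberOfChannels,
                                  size_t length)
{
    const float GainCalibration = 0.00125f; // assumed perceptual calibration constant
    const float MinPower = 0.000125f;       // floor to avoid dividing by ~zero

    // Total power of the impulse response across all channels.
    float power = 0;
    for (size_t c = 0; c < numberOfChannels; ++c)
        for (size_t i = 0; i < length; ++i)
            power += channelData[c][i] * channelData[c][i];

    // RMS over every sample in every channel.
    float rms = std::sqrt(power / (numberOfChannels * length));

    // Clamp pathological inputs (silence, NaN, Inf) to the floor value.
    if (!std::isfinite(rms) || rms < MinPower)
        rms = MinPower;

    // Scale chosen so that convolving with the response keeps perceived
    // loudness roughly comparable to the unprocessed signal.
    return GainCalibration / rms;
}
```

Expressed this way, the algorithm needs nothing beyond the raw channel data and its dimensions, which removes the dependency on implementation-internal data structures that the comment objects to.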
Received on Tuesday, 12 June 2012 09:15:42 UTC