- From: Olivier Thereaux <notifications@github.com>
- Date: Wed, 11 Sep 2013 07:29:38 -0700
- To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
- Message-ID: <WebAudio/web-audio-api/issues/104/24244275@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17366#6) by Chris Wilson on W3C Bugzilla. Tue, 03 Sep 2013 21:08:51 GMT

(In reply to [comment #6](#issuecomment-24244250))

> Of course there need to be ripples. But do we need to pre-duck the waveform
> to avoid clipping during naive playback, or can that be a problem for
> content authors? This API uses float samples, so there's no problem with
> excursions beyond 1.0 in the oscnode output; it can be adjusted later by a
> gain node, etc.

It certainly CAN be a problem for content authors; if they don't adjust with a gain node at some point in the chain, they WILL get clipping distortion. On the plus side, a -1 to 1 oscillator is so relatively loud that I expect most developers DO adjust with a gain node already. :)

Much like de-zippering, it really comes down to how much we want the default case to sound good versus how predictable we want behavior to be for advanced use.

> > Also, I think that we should decide whether or not it's OK for
> > implementations to use different signal generation methods (e.g. trade
> > quality for performance), or if all implementations must use a specific
> > signal generation method.
>
> This is a more serious question. Do we mind if synths sound slightly
> different? What about using an oscnode as an lfo, or an animation driver,
> like Chris suggested? Definite values are more important then.

I do want to separate the issue of signal generation method (e.g. anti-aliasing vs. simple math) from the ducking/level issue.

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/104#issuecomment-24244275
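The gain-node adjustment discussed above is straightforward in practice. A minimal sketch, assuming today's unprefixed `AudioContext` and string-valued oscillator types; the frequencies and gain values are arbitrary examples, not from the thread:

```js
const ctx = new AudioContext();

// A raw oscillator swings roughly -1 to 1, and the bandlimited ripples
// (Gibbs phenomenon) can overshoot slightly past 1.0. Scaling it down
// with a GainNode before the destination keeps naive playback from clipping.
const osc = ctx.createOscillator();
osc.type = 'square';
osc.frequency.value = 440;

const gain = ctx.createGain();
gain.gain.value = 0.25; // leave headroom for the ripple overshoot

osc.connect(gain);
gain.connect(ctx.destination);

// The LFO case from the thread: here the full-scale -1 to 1 output is
// desirable, because the oscillator drives an AudioParam rather than the
// speakers. A connected signal is summed with the param's own value, so
// gain.gain below sweeps roughly 0.05..0.45 at 5 Hz (a tremolo).
const lfo = ctx.createOscillator();
lfo.frequency.value = 5;

const depth = ctx.createGain();
depth.gain.value = 0.2; // modulation depth

lfo.connect(depth);
depth.connect(gain.gain);

osc.start();
lfo.start();
```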
Received on Wednesday, 11 September 2013 14:30:28 UTC