W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: [web-audio-api] (OscillatorTypes): Oscillator types are not defined (#104)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:29:36 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/104/24244250@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17366#5) by Ralph Giles on W3C Bugzilla. Tue, 03 Sep 2013 21:03:14 GMT

(In reply to [comment #4](#issuecomment-24244238))
> Paul, I think that specifying the amplitude (i.e. time-domain signal) the
> way you suggest requires that an implementation does not deal with frequency
> folding.

Of course there need to be ripples. But do we need to pre-duck the waveform to avoid clipping during naive playback, or can that be left as a problem for content authors? This API uses float samples, so excursions beyond 1.0 in the oscnode output aren't a problem in themselves; the signal can be scaled down later by a gain node, etc.
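To make the ripple point concrete, here is a quick numeric sketch (mine, not from the thread): a bandlimited square wave built by summing odd harmonics overshoots ±1.0 by roughly 9% near the discontinuities (the Gibbs phenomenon), no matter how many partials you keep. So an implementation that doesn't pre-scale will emit samples beyond 1.0.

```python
import math

def bandlimited_square(phase, partials):
    """Partial Fourier series of a square wave: (4/pi) * sum of odd harmonics."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * phase) / (2 * k + 1) for k in range(partials)
    )

# Sample one cycle densely and find the peak absolute value.
N = 10000
peak = max(abs(bandlimited_square(2 * math.pi * i / N, 32)) for i in range(N))
# peak exceeds 1.0 because of Gibbs ripple near the edges
```

With float samples that overshoot is harmless inside the graph; it only matters if the output reaches the destination unscaled.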

> Also, I think that we should decide whether or not it's OK for
> implementations to use different signal generation methods (e.g. trade
> quality for performance), or if all implementations must use a specific
> signal generation method.

This is a more serious question. Do we mind if synths sound slightly different across implementations? What about using an oscnode as an LFO, or as an animation driver, as Chris suggested? Definite values are more important in that case.

Reply to this email directly or view it on GitHub:
Received on Wednesday, 11 September 2013 14:30:18 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:24 UTC