
[Bug 17366] (OscillatorTypes): Oscillator types are not defined

From: <bugzilla@jessica.w3.org>
Date: Tue, 03 Sep 2013 21:08:51 +0000
To: public-audio@w3.org
Message-ID: <bug-17366-5429-oBPZGaxC7f@http.www.w3.org/Bugs/Public/>
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17366

--- Comment #7 from Chris Wilson <cwilso@gmail.com> ---
(In reply to comment #6)
> Of course there need to be ripples. But do we need to pre-duck the waveform to
> avoid clipping during naive playback, or can that be a problem for content
> authors? This API uses float samples, so there's no problem with excursions
> beyond 1.0 in the oscnode output; it can be adjusted later by a gain node,
> etc.

It certainly CAN be a problem for content authors; if they don't adjust with a
gain node at some point in the chain, they WILL get clipping distortion.  On
the plus side, a -1 to 1 oscillator is loud enough that I expect most
developers DO adjust with a gain node already.  :)

Much like de-zippering, it's really a question of how much we want the
default case to sound good vs. keeping predictability for advanced use.
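For concreteness, here is a small sketch (not from the thread or the spec) of the overshoot being discussed: a square wave rebuilt from a finite number of harmonics rings past +/-1 near each edge (the Gibbs phenomenon), so an un-ducked bandlimited oscillator can exceed full scale even though the ideal waveform never does.

```javascript
// Illustrative sketch: why a bandlimited square wave overshoots +/-1.
// Summing a finite number of odd harmonics reproduces the square wave's
// flat tops, but rings past full scale near each discontinuity (the Gibbs
// phenomenon) -- the "excursions beyond 1.0" under discussion.

function bandlimitedSquare(phase, harmonics) {
  // Fourier series of a square wave: odd harmonics k at amplitude 1/k,
  // scaled by 4/pi so the flat portions sit near +/-1.
  let sum = 0;
  for (let k = 1; k <= harmonics; k += 2) {
    sum += Math.sin(2 * Math.PI * k * phase) / k;
  }
  return (4 / Math.PI) * sum;
}

// Scan one cycle for the peak sample value.
let peak = 0;
for (let i = 0; i < 10000; i++) {
  peak = Math.max(peak, Math.abs(bandlimitedSquare(i / 10000, 31)));
}
console.log(peak); // above 1.0 -- naive full-scale playback would clip
```

Scaling by 1/peak, whether via a downstream gain node or by pre-ducking the waveform itself, is exactly the trade-off being weighed here.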

> > Also, I think that we should decide whether or not it's OK for
> > implementations to use different signal generation methods (e.g. trade
> > quality for performance), or if all implementations must use a specific
> > signal generation method.
> 
> This is a more serious question. Do we mind if synths sound slightly
> different? What about using an oscnode as an LFO, or an animation driver,
> like Chris suggested? Well-defined values are more important in that case.

I do want to separate the issue of signal generation method (e.g.
anti-aliasing vs. simple math) from the ducking/level issue.
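To make the first issue concrete, a hypothetical comparison (the function names are illustrative, not from any implementation): two ways a conforming engine might generate the same nominal square wave. Unless the spec mandates one method, their sample values diverge, which matters most when the node is driving an AudioParam rather than a speaker.

```javascript
// Hypothetical comparison: "simple math" vs. a bandlimited sum for the same
// nominal square wave. The naive version is cheap but aliases; the
// bandlimited version is cleaner but differs sample by sample. If the spec
// permits either, outputs are not identical across implementations.

function naiveSquare(phase) {
  // Hard-edged square: +1 for the first half of the cycle, -1 for the second.
  return phase % 1 < 0.5 ? 1 : -1;
}

function bandlimitedSquare(phase, harmonics) {
  // Truncated Fourier series of a square wave (odd harmonics only).
  let sum = 0;
  for (let k = 1; k <= harmonics; k += 2) {
    sum += Math.sin(2 * Math.PI * k * phase) / k;
  }
  return (4 / Math.PI) * sum;
}

// Largest per-sample disagreement over one cycle.
let maxDiff = 0;
for (let i = 0; i < 1000; i++) {
  const p = i / 1000;
  maxDiff = Math.max(maxDiff, Math.abs(naiveSquare(p) - bandlimitedSquare(p, 15)));
}
console.log(maxDiff); // large near the discontinuities
```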

-- 
You are receiving this mail because:
You are on the CC list for the bug.
Received on Tuesday, 3 September 2013 21:08:53 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:23 UTC