
Re: Hi all.

From: Chris Rogers <crogers@google.com>
Date: Mon, 10 Dec 2012 12:19:41 -0800
Message-ID: <CA+EzO0nsZo4CAJ2LEOaAaT5YSAbU1_Jo92PHw98YdmB3mxMVug@mail.gmail.com>
To: Marcus Geelnard <mage@opera.com>
Cc: redman <redman@redman.demon.nl>, Chris Wilson <cwilso@google.com>, "public-audio@w3.org" <public-audio@w3.org>
On Mon, Dec 10, 2012 at 3:53 AM, Marcus Geelnard <mage@opera.com> wrote:

> Den 2012-11-21 14:26:03 skrev Chris Wilson <cwilso@google.com>:
>
>
> I ran into this recently too.  It's due to dezippering in .value.  The way
> to avoid it is, instead of setting AudioParam.value, to call
> AudioParam.setValueAtTime(value, audioContext.currentTime);
> setValueAtTime is not dezippered.
>
>
> I can't find any mention of de-zippering of the value attribute in the
> AudioParam spec.
>
> For the AudioGain node there is still the somewhat vague sentence "The
> implementation must make gain changes to the audio stream smoothly, without
> introducing noticeable clicks or glitches" (see bug 17339). It only
> applies to AudioGain, though, and to me it sounds as if it should apply to
> all kinds of events, including setValueAtTime.
>
> /Marcus
>
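
The workaround Chris Wilson describes above (scheduling an immediate change rather than assigning .value) can be sketched as follows. Since no real AudioContext is assumed here, `ctx` and `param` are hypothetical stand-ins with the same shape as an AudioContext and an AudioParam; with real Web Audio objects the same call works unchanged:

```javascript
// Sketch of the workaround: schedule an immediate value change instead of
// assigning .value, so the (dezippered) assignment path is bypassed.
function setValueNow(param, ctx, value) {
  // Equivalent in effect to `param.value = value`, but not dezippered:
  param.setValueAtTime(value, ctx.currentTime);
}

// Minimal stand-ins so the sketch is runnable outside a browser:
const ctx = { currentTime: 1.5 };
const param = {
  value: 0,
  setValueAtTime(v, t) { this.value = v; this.lastEventTime = t; },
};

setValueNow(param, ctx, 440); // e.g. jump an oscillator's frequency to 440 Hz
```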

I need to add more detail on exactly how the de-zippering should work.  The
intention is that this smoothing should be applied when some kind of UI
element is modifying an AudioParam value (by setting the .value attribute
directly).  In the simplest case, this could be something like a volume
knob or slider being adjusted, where we certainly do not want to hear any
zippering artifacts.
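As a rough illustration only (the actual smoothing algorithm and coefficient are implementation details, not anything the spec pins down yet), dezippering can be thought of as a one-pole lowpass on the parameter: each processing step, the effective value moves a fixed fraction of the way toward the target, so a sudden .value jump becomes a short ramp instead of a click:

```javascript
// Illustrative one-pole smoother; coefficient 0.1 is an arbitrary assumption.
function makeSmoother(initialValue, coefficient) {
  let current = initialValue;
  return {
    // Called once per processing step with the target (.value) setting.
    step(target) {
      current += coefficient * (target - current); // move a fraction toward target
      return current;
    },
  };
}

// A .value jump from 0 to 1 ramps in over many steps instead of clicking:
const smoother = makeSmoother(0, 0.1);
let v = 0;
for (let i = 0; i < 10; i++) v = smoother.step(1);
// After 10 steps v = 1 - 0.9^10, still well short of the target.
```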

The intention for the "automation" methods of AudioParam is that the
parameter values should be *exactly* as they are specified (with no
smoothing/de-zippering).  It's important to be able to specify exact curves
to be applied for envelopes, grain windows, etc.  But the
setTargetAtTime() method provides a convenient way to achieve de-zippering
there, if desired.
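For reference, setTargetAtTime() follows an exponential approach toward the target value, v(t) = target + (v0 - target) * exp(-(t - startTime) / timeConstant), which is why it works as an explicit, controllable de-zipper. The sketch below just evaluates that curve in plain JS (no AudioContext needed); in real code the equivalent call would be something like gainNode.gain.setTargetAtTime(1, audioContext.currentTime, 0.01):

```javascript
// The exponential approach used by AudioParam.setTargetAtTime():
//   v(t) = target + (v0 - target) * exp(-(t - startTime) / timeConstant)
function targetApproach(v0, target, startTime, timeConstant, t) {
  return target + (v0 - target) * Math.exp(-(t - startTime) / timeConstant);
}

// Ramping a gain from 0 toward 1 with a 10 ms time constant:
const v0 = 0, target = 1, t0 = 0, tau = 0.01;
const atStart = targetApproach(v0, target, t0, tau, t0);           // exactly v0
const afterOneTau = targetApproach(v0, target, t0, tau, tau);      // ~63.2% of the way
const afterFiveTau = targetApproach(v0, target, t0, tau, 5 * tau); // ~99.3% of the way
```

Because the curve never quite reaches the target, a common rule of thumb is to treat the transition as finished after a few time constants.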

Chris



>
> On Tue, Nov 20, 2012 at 12:29 PM, redman <redman@redman.demon.nl> wrote:
>
>> Hello everyone.
>>
>> I've recently become involved in a Web Audio API project (so this is
>> directed at the people involved in defining it).
>> It's a reasonably simple affair with a couple of synths and a drum
>> computer.
>> But I have run into some strange behaviour from a musical perspective.
>> The problem is that apparently when you use .value = to set an
>> AudioParam, the value is heavily (sloooowly) interpolated.
>> Now I understand why you do this, but it doesn't make sense to do it all
>> the time, and it certainly doesn't make sense not to be able to change
>> the speed of this interpolator.
>>
>> One of the problems occurs when setting the frequency of an
>> oscillator.
>> The frequency will change, but not instantaneously: it slides from one
>> value to the next.
>> And that is pretty useless, as the change is way too slow for most
>> situations.
>> What does work is scheduling the change, but this is not always
>> wanted and seems counter-intuitive. When I set a value I expect it to
>> just change.
>> What would be useful is some way to either disable this behaviour or
>> control the speed of interpolation.
>> In the case of oscillator frequency it actually represents an optional
>> function usually called portamento.
>> And of course in real synths this function can be switched off or tweaked
>> to fit the sound designer's intention.
>> But in the implementation (Chrome) that I use it is neither optional nor
>> configurable, which is a bad thing and actually unwanted most of the time.
>> If you think it's great because getting rid of clicks gave you a free
>> portamento, think again! :)
>> It is just not always appropriate to have this behaviour, and fixing it
>> globally is like looking at the world through a hammer: everything becomes
>> a nail.
>>
>> So I hope you can fix that in the future. Not all modulations are meant
>> to be used with smooth continuous signals!
>> Preferably there would be separate functions to have the input either
>> interpolated or not. That way you could simultaneously input values that
>> you want interpolated and values that you do not, and they would be mixed
>> together down the path before feeding the actual audio algorithm.
>> Also, there may be a future need to choose the actual interpolation
>> algorithm, because not all types of interpolation work equally well on
>> all parameter types.
>> But first things first :) make it so that it doesn't stand in the way of
>> building great synths.
>>
>> greets,
>> aka.
>>
>
> --
> Marcus Geelnard
> Core graphics developer
> Opera Software ASA
>
Received on Monday, 10 December 2012 20:20:13 UTC
