W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: [web-audio-api] (setValueCurveAtTime): AudioParam.setValueCurveAtTime (#131)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:29:53 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/131/24244482@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17335#14) by Chris Rogers on W3C Bugzilla. Tue, 11 Dec 2012 21:21:20 GMT

(In reply to [comment #14](#issuecomment-24244478))
> Is there anything against the idea of having separate interpolator objects?
> I start to realize that a lot of the time you want to make a choice *if* and
> *how* you want to interpolate. So why not have interpolation as a separate
> module that you can plug in front of an AudioParam.
> Or is this a crazy idea? :)

I don't think there's any simple way to generically abstract an interpolator object to work at the level of the modules in the Web Audio API.  Marcus has suggested a much lower-level approach with his math library, but that assumes a processing model very different from the "fire and forget" model we have here.

Even if there were a way to simply create an interpolator object and somehow attach it to nodes (which I don't think there is), I think that in the 99.99% case developers don't want to worry about such low-level details for tasks like "play sound now".  I've tried to design the AudioNodes so that they all have reasonable default behavior, trading off quality versus performance.  An attribute for interpolation quality seems like a simple way to extend that default behavior without requiring developers to deal with interpolator objects all the time.
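To make the trade-off concrete, here is a minimal sketch of what an interpolation-quality choice could mean when a value curve is sampled over a time interval, as setValueCurveAtTime does.  This is purely illustrative: the function name sampleCurve and the "nearest"/"linear" modes are assumptions for the sake of the example, not anything specified in the Web Audio API.

```javascript
// Hypothetical sketch (not spec'd API): sampling a value curve over a
// duration, with a per-call interpolation "quality" choice standing in
// for the proposed attribute.
function sampleCurve(curve, duration, t, mode = "linear") {
  // Map time t in [0, duration] to a fractional index into the curve.
  const pos = (t / duration) * (curve.length - 1);
  if (mode === "nearest") {
    // Cheapest option: snap to the closest curve point (stepped output).
    return curve[Math.round(pos)];
  }
  // Default: linear interpolation between the two neighboring points,
  // i.e. the kind of reasonable default behavior described above.
  const i = Math.floor(pos);
  const frac = pos - i;
  const next = Math.min(i + 1, curve.length - 1);
  return curve[i] * (1 - frac) + curve[next] * frac;
}

// Halfway through a two-point curve [0, 1]:
console.log(sampleCurve([0, 1], 1.0, 0.5, "linear"));  // 0.5
console.log(sampleCurve([0, 1], 1.0, 0.5, "nearest")); // 1
```

The point of the sketch is that the mode is a single switch on otherwise identical machinery, which is why a quality attribute on the node extends the defaults without exposing a separate interpolator object.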

Received on Wednesday, 11 September 2013 14:30:46 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:24 UTC