- From: Alessandro Saccoia <alessandro.saccoia@gmail.com>
- Date: Tue, 22 May 2012 10:16:26 +0200
- To: Chris Rogers <crogers@google.com>
- Cc: Chris Wilson <cwilso@google.com>, Marcus Geelnard <mage@opera.com>, public-audio@w3.org, Alistair MacDonald <al@signedon.com>
Hello Chris,

> > but in other cases one could need a different type of filter. It would then be nice to expose the coefficients, so that developers can twiddle with them without needing to resort to custom filters implemented in JavaScript nodes.
>
> Yes, I agree! I just haven't yet put something like that in the spec because people are still trying to digest what's in there today.

great!

> > This also makes the programming interface a bit awkward. An example of this is in the Granular Effects example.
>
> I'm not sure which part you don't like. In that example we create a custom curve for the grain window and then simply call setValueCurveAtTime(). I think it's pretty concise (few lines of JS) and is flexible, allowing arbitrary curves.

It is concise. I suppose I just think too much in the C++ way, using loops or something along the lines of std::transform. Letting user code touch the internal representation of the AudioParams would introduce much more complexity, so... fine by me.
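Just to make the std::transform remark concrete, this is roughly how I picture building the window by hand. A minimal sketch, assuming an existing AudioContext named "context", an AudioBufferSourceNode named "source", and the createGainNode()/noteGrainOn() names from the current draft:

    // Build a Hann window to use as the grain envelope.
    var windowLength = 512;
    var curve = new Float32Array(windowLength);
    for (var i = 0; i < windowLength; i++)
        curve[i] = 0.5 * (1.0 - Math.cos(2.0 * Math.PI * i / (windowLength - 1)));

    // Shape each grain through a gain node, as in the Granular Effects example.
    var grainGain = context.createGainNode();
    source.connect(grainGain);
    grainGain.connect(context.destination);

    var grainDuration = 0.080; // seconds
    var grainTime = context.currentTime + 0.100;
    grainGain.gain.setValueCurveAtTime(curve, grainTime, grainDuration);
    source.noteGrainOn(grainTime, 0, grainDuration);

(A Gaussian or triangular window would just be a different loop body.)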
> > - With the current specification I don't think it would be possible to build an FM synthesizer, because there is no way to know whether the phase will be reset when setting the Oscillator frequency.
>
> It's definitely possible. In fact, I've been playing around with some FM demos and getting good results. The demos are not quite ready to show, but hopefully I can show them quite soon.

looking forward to it

> > - native support for the FFT, with an interface like the one in Pure Data or Max/MSP, would take minimal effort but give great joy to some people, and in this case a native implementation would have a big performance gain
>
> I'm open to suggestions if you can give some more detail.

That could be done by passing the user-defined event handler an object (with an Event-style interface) containing the results of the FFT, say in polar form, so that the square root and atan operations can be done quickly by the implementation. Polar form is also what most people would want, since it lets them directly modify the magnitude and phase of the bins.
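To give a bit more detail, here is a purely hypothetical sketch of the kind of interface I am imagining. None of these names (createFFTNode, onspectrum, magnitude, phase) exist in the spec today; they are only meant to illustrate the shape of the API:

    // Hypothetical: a native FFT node that hands each analysis frame
    // to user code in polar form before resynthesis.
    var fft = context.createFFTNode(1024);  // hypothetical factory, 1024-point FFT
    fft.onspectrum = function (event) {     // hypothetical per-frame event
        var mag = event.magnitude;          // Float32Array, one value per bin
        var phase = event.phase;            // Float32Array, same length
        for (var i = 0; i < mag.length; i++) {
            // e.g. a crude spectral gate: silence the quiet bins
            if (mag[i] < 0.01)
                mag[i] = 0.0;
        }
        // the implementation would then resynthesize (inverse FFT plus
        // overlap-add) from the modified magnitude/phase arrays
    };
    source.connect(fft);
    fft.connect(context.destination);

The FFT size and the overlap factor could be parameters of the hypothetical factory method.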
> By the way, I'm very supportive of having the WaveShaperNode optionally *internally* implement the non-linear distortion at higher sample rates to avoid aliasing. I think this will be essential for high-quality distortion effects.

yes, that's what I was thinking of. I guess the specification could state some desired properties of the rate conversion, for example the minimum rolloff at the Nyquist frequency and the latency of the sample-rate conversion, or at least give user code a way of querying these properties if they are allowed to vary between implementations.

Thank you,
alessandro

> Thanks for checking things out!
>
> Cheers,
> Chris
>
> > alessandro
> >
> > On May 21, 2012, at 7:40 PM, Chris Wilson wrote:
> >
> > > On Mon, May 21, 2012 at 4:44 AM, Marcus Geelnard <mage@opera.com> wrote:
> > > > However, a large part of the API seems to come into good use mostly for "Musical Applications".
> > >
> > > I'm not sure I agree with this. I think you'll find that a lot of environmental effects ("voice on a telephone", etc.) are easily replicated using filters and convolution. It's certainly true that there are a few areas where this is the case - e.g. Oscillator and DynamicsCompressor - but a lot of the other effects can make immersive sound environments in games a lot easier to create.
> > >
> > > > This leads me to believe that the JavaScript processing node will be very important (for implementing custom effects and instruments, and possibly even for creating effect libraries), while some native nodes (such as the Oscillator, BiquadFilterNode, DynamicsCompressorNode and DelayNode nodes) will not be used as much.
> > >
> > > Actually, I have to disagree there - I think you'll find that a TREMENDOUS amount of custom processing and effects can be easily built on top of the primitives in the Web Audio proposal today; I think filters, delays and convolution will remove a lot of the need for JavaScript nodes.
> > >
> > > As an example - for Google IO, I'm building a demo that does vocoding using the Web Audio API. In order to implement this, I've had to use bandpass filters, lowpass filters, waveshapers, gain nodes in interesting ways, and Oscillators (including a WaveTable oscillator, since those work incredibly well as carrier sources for vocoders). I went into the project fairly naively (I own a vocoder rack unit and understand the basic concept, but I had a lot of learning to do). This has prompted me - and I am NOT an audio engineer, FWIW - to think through a bunch of the cases I find interesting, like AutoTune-like effects and harmonizers, and distortion effects to build guitar amp simulators, etc. - even aside from my original passion, which is software synthesizers. I'm fairly convinced that most of the building blocks are there, and JavaScript nodes will be needed only in a few cases.
> > >
> > > That's not to say JavaScriptNodes aren't a critical case - they are, and they should work well - and I think the first time I met Chris in person (we work in different offices) I asked if he would add Worker-based JavaScriptNode support. I really would not expect developers to reach for them first, though - it's a great tool, but it's not the best tool for the job a lot of the time.
> > >
> > > -Chris

Received on Tuesday, 22 May 2012 08:17:23 UTC