- From: Phil Burk <philburk@mobileer.com>
- Date: Wed, 08 Feb 2012 10:39:27 -0800
- To: public-audio@w3.org
Hello Robert,

On 2/7/12 6:35 PM, Robert O'Callahan wrote:
> Ultimately we're going to need more than one implementation of
> (whatever the API is), probably one per browser engine. So it's not
> just a matter of contributing code to "the Web Audio API".

Sorry. I just joined on Monday, so I am not familiar with your process.

> I don't think having contributed C code in browsers for all the
> effects people want is going to scale. I think it's important to have
> the best possible support for JS-based audio generation and
> processing. That probably means using Workers, as
> ProcessedMediaStream does:
> http://people.mozilla.org/~roc/stream-demos/worker-generation.html

I agree that we have the option of using JavaScript to implement custom
functions. That is really critical for experimentation or custom sound
generation. But some unit generators are so commonly used that they can
be considered the elements from which audio molecules are built. It would
be nice to provide them in the API.

> However, it's very important to distinguish the Web Audio API from
> the Webkit implementation of that API. If the Web Audio API is to
> become a W3C spec, it needs to be implementable from the spec, without
> borrowing or reverse engineering the code of Webkit or any other
> implementation.

I agree. We need to distinguish between specification and implementation.
I believe that the oscillators can be defined precisely in terms of their
spectrum and a time-domain description. Wikipedia has a pretty good
definition of a sawtooth:

http://en.wikipedia.org/wiki/Sawtooth_wave

Each oscillator will have a frequency control that can range from
-Nyquist to +Nyquist. An amplitude control is very handy but not strictly
necessary. The noise generators can also be defined in terms of their
spectrum.

Phil Burk
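As an illustration of the kind of spectrum plus time-domain definition
suggested above, here is a minimal JavaScript sketch of a sawtooth along
the lines of the Wikipedia definition. The function names and parameters
are illustrative only and are not part of any proposed API.

// Ideal time-domain sawtooth, normalized to [-1, 1].
// t is time in seconds, f is frequency in Hz (may be negative, up to +/- Nyquist).
function idealSawtooth(t, f) {
  var phase = t * f;
  return 2 * (phase - Math.floor(phase + 0.5));
}

// Band-limited sawtooth built from its spectrum: harmonic k has
// amplitude proportional to 1/k. Summing only up to the Nyquist limit
// avoids aliasing at the given sample rate.
function bandlimitedSawtooth(t, f, sampleRate) {
  var nyquist = sampleRate / 2;
  var maxHarmonic = Math.floor(nyquist / Math.abs(f));
  var sum = 0;
  for (var k = 1; k <= maxHarmonic; k++) {
    sum += Math.pow(-1, k + 1) * Math.sin(2 * Math.PI * k * f * t) / k;
  }
  return (2 / Math.PI) * sum;
}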
Received on Wednesday, 8 February 2012 18:42:59 UTC