- From: Chris Rogers <crogers@google.com>
- Date: Mon, 6 Aug 2012 11:43:56 -0700
- To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
- Cc: Chris Wilson <cwilso@google.com>, Peter van der Noord <peterdunord@gmail.com>, public-audio@w3.org
- Message-ID: <CA+EzO0=j7kk6wKzgijaZDn=nq_g6ZDx-XkWhRAdJVAxNAyiz5Q@mail.gmail.com>
On Mon, Aug 6, 2012 at 11:06 AM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:

> On Mon, Aug 6, 2012 at 8:09 PM, Chris Wilson <cwilso@google.com> wrote:
>
>> On Sat, Aug 4, 2012 at 7:05 AM, Peter van der Noord <peterdunord@gmail.com> wrote:
>>
>>> 2012/8/3 Chris Wilson <cwilso@google.com>
>>>
>>>> On Fri, Aug 3, 2012 at 10:40 AM, Peter van der Noord <peterdunord@gmail.com> wrote:
>>>>
>>>>> I agree fully that it won't be what most developers want or need to do; the API will be used for games and site music/effects mostly, but creating custom nodes would be my primary focus. To be honest, the list of native nodes that I wanted to use has thinned out, due to some behaviours and implementations that were not appropriate for what I wanted. That's all fine by itself, but if I can't recreate them myself...
>>>>
>>>> Have you filed issues on those behaviors and implementations?
>>>
>>> I have posted quite a lot of messages to this list as I went along, commenting on things that I found while working with the API. Some suggestions made sense, some didn't, but to be honest this isn't really the point. Apart from the fact that I don't want the working group to decide whether my ideas are valid, and, if they are, to wait a year for them to be implemented in a browser (and probably not exactly as I'd want, so I'd have to work around it), the main point is: I want to create my own nodes.
>>>
>>> You mention somewhere in this thread that nearly everything can be done with the native nodes, but this just isn't true. Really, it isn't. For example: there are literally hundreds of modules on sale for the Eurorack system (the most popular type of modular synthesizer; I did a really quick count at a popular store and saw about 500-600), with new ones being added every year. By your argument, you'd only need 8 types or so and would be able to do everything with those.
>>> Obviously, that is not the case.
>>
>> There are a limited number of electrical components those modules are made of (resistors, capacitors, etc.), so I have to disagree. AudioNodes are not a 1:1 comparison to modular synth modules - most modular synth modules would combine a number of different AudioNode components, in my expectation. I still believe that scripted nodes are important to get right as well, but I believe most scenarios actually CAN be satisfied with a combination of native modules.
>>
>>> You mentioned the only thing you'd want would be a noise gate (IIRC) - a nice one indeed, but what's the use if it adds a delay to the signal? Correct me if I'm wrong, but in my opinion (or more exactly: in my case) it's useless.
>>
>> Given that a noise gate would be a trivial node to implement, I'd be able to get away with implementing it with a very small bufferSize, which would put its delay on the order of 5-10 ms. That's still unfortunate, and I'd much rather have it as a native module, yes. I'd also rather have an AudioNode output of the "reduction" parameter in the DynamicsCompressor to do side-chaining and ducking; but I see these as evolutionary things that we're working toward, not a reason to start over and expect that everyone wants to be calling matrix methods directly.
>>
>>>> I know custom nodes are way slower than the native ones, and I accept that. For comparison, there's nothing native going on in the modular synth I created in Flash, although I do admit there is some trickery involved. But still, it's the single-threaded Flash player, with each module writing buffers in plain old AS3 for-loops. I have no idea how AS3 compares to JS (if anyone knows, I'd be interested to hear), but if I can recreate that same performance with JS and custom nodes, I'm quite happy.
>>>> (The opening patch on patchwork-synth.com runs 20-30 modules or so, and most of them are not very optimized at all; my sine osc still calls Math.sin to create its signal, there's the same biquad filter in there coded by hand in AS3, and still the UI reacts fine.)
>>
>> And if that's how you want to implement it, simply use a single JS node connected for output, and continue to do everything else yourself. A single delay of 5-10 ms shouldn't be a critical issue. It seems to me that you've already DONE the work to implement a lot of audio processing yourself; one major goal behind Web Audio was to make that kind of processing accessible to those who don't already have that code written, and/or would benefit from the performance boost of having natively-implemented, optimized audio code running in a separate-from-the-UI-thread, high-priority thread. I don't want to understate the importance of optimizing and insulating the audio path from glitching (for example, resizing the window while your opening patch was playing glitches badly on my Xeon MacPro).
>>
>> I guess what I'm trying to get at is that there's a huge difference between "I want to create my own programmatic modules" and "I want to create my own nodes." The vocoder, for example, is essentially a bunch of programmatic modules plugged together; however, it doesn't use JSNodes at all.
>
> Funny that you should mention these things:
>
> * Ease of use of the API.
> * Performance benefits.
> * The possibility of creating almost any system with native nodes.
>
> To me, the last point counteracts both of the former ones. Essentially, what you're suggesting is to make software developers think of their audio systems in terms of electronics, and that everything can be made out of these components. While true, this is software, and the API is going to be used by software developers.
> That means that if you make them think in terms of electronics rather than software, there's hardly any point to be made for ease of use. Not to mention performance.
>
> Your vocoder is a good example, actually. Don't take me wrong, it's a really cool demo. But if you compare the complexity of implementing it with a JavaScriptNode and the DSP API, the difference is astonishing.
>
> In mathematical terms you could define a vocoder as `output = IFFT(FFT(window(input)) * FFT(window(carrier)))`, and an implementation would be a few lines of code, whereas your vocoder is a few hundred! And that's even before thinking about performance or accuracy. I'm pretty certain an implementation even in pure JavaScript (without the DSP API) would outperform that setup, and exponentially so as you increase the number of frequency bands used.
>
> Cheers,
> Jussi

Jussi, old-school vocoders use more of a constant-Q approach:

https://ccrma.stanford.edu/~Jos/sasp/Audio_Filter_Banks.html

So FFTs are less useful there...

Chris
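Jussi's one-line definition, `output = IFFT(FFT(window(input)) * FFT(window(carrier)))`, can be sketched per frame in plain JavaScript. This is a minimal illustration, not code from the thread: it uses a naive O(N²) DFT for self-containment (a real implementation would use an FFT), and the names `hann`, `dft`, `idft`, and `vocodeFrame` are assumptions.

```javascript
// Hann-window a frame of samples.
function hann(frame) {
  const N = frame.length;
  return frame.map((x, n) => x * 0.5 * (1 - Math.cos((2 * Math.PI * n) / (N - 1))));
}

// Naive O(N^2) DFT of a real signal; returns real and imaginary parts.
function dft(signal) {
  const N = signal.length;
  const re = new Array(N).fill(0);
  const im = new Array(N).fill(0);
  for (let k = 0; k < N; k++) {
    for (let n = 0; n < N; n++) {
      const phi = (-2 * Math.PI * k * n) / N;
      re[k] += signal[n] * Math.cos(phi);
      im[k] += signal[n] * Math.sin(phi);
    }
  }
  return { re, im };
}

// Inverse DFT, keeping the real part (the spectra we feed it are
// conjugate-symmetric, so the imaginary part is ~0 anyway).
function idft({ re, im }) {
  const N = re.length;
  const out = new Array(N).fill(0);
  for (let n = 0; n < N; n++) {
    for (let k = 0; k < N; k++) {
      const phi = (2 * Math.PI * k * n) / N;
      out[n] += re[k] * Math.cos(phi) - im[k] * Math.sin(phi);
    }
    out[n] /= N;
  }
  return out;
}

// One frame of the vocoder: multiply the two spectra bin by bin
// (complex multiplication), then transform back to the time domain.
function vocodeFrame(input, carrier) {
  const A = dft(hann(input));
  const B = dft(hann(carrier));
  const re = A.re.map((a, k) => a * B.re[k] - A.im[k] * B.im[k]);
  const im = A.re.map((a, k) => a * B.im[k] + A.im[k] * B.re[k]);
  return idft({ re, im });
}
```

A complete vocoder would run this over overlapping frames with overlap-add; the constant-Q filter-bank design Chris points to replaces the spectral multiply with a bank of band-pass filters.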
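The "trivial" noise gate discussed above can likewise be sketched in a few lines of plain JavaScript. This is an editorial sketch, not code from the thread: the parameter names (`threshold`, `attack`, `release`) and the one-pole envelope follower are assumptions, and in a browser this loop would run inside a JavaScriptAudioNode's `onaudioprocess` callback over the event's input and output buffers.

```javascript
// Returns a stateful per-buffer processor implementing a hard noise gate:
// track the signal level with a one-pole envelope follower, and pass samples
// through only while the envelope exceeds the threshold.
function makeNoiseGate({ threshold = 0.01, attack = 0.99, release = 0.999 } = {}) {
  let envelope = 0; // smoothed signal level, persists across buffers
  return function process(inputBuffer, outputBuffer) {
    for (let i = 0; i < inputBuffer.length; i++) {
      const level = Math.abs(inputBuffer[i]);
      // faster smoothing on the way up, slower on the way down
      const coeff = level > envelope ? attack : release;
      envelope = coeff * envelope + (1 - coeff) * level;
      outputBuffer[i] = envelope > threshold ? inputBuffer[i] : 0;
    }
  };
}
```

At a 44.1 kHz sample rate, a bufferSize of 256 makes each block about 5.8 ms, which is consistent with the 5-10 ms delay estimate in the thread.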
Received on Monday, 6 August 2012 18:44:25 UTC