- From: Chris Rogers <crogers@google.com>
- Date: Sun, 5 Aug 2012 13:22:09 -0700
- To: Srikumar Karaikudi Subramanian <srikumarks@gmail.com>
- Cc: lonce wyse <lonce.wyse@zwhome.org>, public-audio@w3.org
- Message-ID: <CA+EzO0knzCgwngzdWCXsM7r2ANX9qkRNXzcQRKLYhRETWxy0ZQ@mail.gmail.com>
On Sat, Aug 4, 2012 at 11:32 PM, Srikumar Karaikudi Subramanian <srikumarks@gmail.com> wrote:

> Yes, I agree that it does seem inconsistent. However, the current "one
> shot" design has some distinct advantages.
>
> Consider the alternatives. If multiple noteOn/noteOff sequences are
> supported for such source nodes, then there is no obvious answer to how
> sequences of noteOn/noteOff calls should be handled. For example, what
> should happen for the call sequence "noteOn(1), noteOff(3), noteOn(2)"?
> Should the second noteOn override the noteOff? Should the noteOff be
> advanced to happen before the second noteOn? Should the noteOn end up
> cancelling the scheduled noteOff? And so on. Suppose each noteOn call on
> such a node is to start a new voice in parallel; then which of these
> voices should a noteOff stop - the most recent one, or all of them? No
> particular choice here seems satisfactory or obviously useful as the
> default.

Kumar, thanks for explaining it like this. Yes, in fact the
AudioBufferSourceNode represents a single "instance" or voice of sound
playback, and each additional voice can be created with a new
AudioBufferSourceNode.

Because I knew that many people would be using this API for synthesizer
applications, and even more commonly for "play sound now" applications, I
wanted to make sure we handle the case where somebody triggers the same
sound several times in quick succession. You don't want to be in a
situation where re-triggering a sound brutally stops the first playback,
which hasn't yet finished, in order to start the second. Instead, you want
the first sound to continue playing while a new voice starts the second
one. The AudioBufferSourceNode represents the object for the voice.

I came to this design after having developed several sample-playback
synths, game playback engines, and analog modeling synth engines in C++,
where the structure of voices always boiled down to this at the low level,
whether it was used for music or for one-shot playback of sounds in a
game. Consider it a low-level building block, which most of the time is
also the one you want to use directly for simple sound playback.

Other cases behave more like a MIDI channel, where you can send as many
noteOn or noteOff commands to a particular channel as you wish. It's very
easy to write a simple JavaScript wrapper called MIDIChannel (for example)
which would allocate the voices as necessary to behave like this - just a
few lines of code, really.
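A minimal sketch of such a wrapper, written against the 2012 draft API
(createBufferSource, noteOn/noteOff - later renamed start/stop). The
MIDIChannel shape, the pitch mapping, and the per-note bookkeeping are
illustrative assumptions, not anything specified:

    // Each noteOn allocates a fresh AudioBufferSourceNode, so quickly
    // re-triggered notes overlap instead of cutting each other off.
    function MIDIChannel(context, buffer, destination) {
      this.context = context;
      this.buffer = buffer;           // the sample this channel plays
      this.destination = destination; // where voice outputs are sent
      this.voices = {};               // sounding voices, keyed by note number
    }

    MIDIChannel.prototype.noteOn = function (note, when) {
      var voice = this.context.createBufferSource();
      voice.buffer = this.buffer;
      // Repitch the sample relative to MIDI note 60 (illustrative mapping).
      voice.playbackRate.value = Math.pow(2, (note - 60) / 12);
      voice.connect(this.destination);
      voice.noteOn(when || 0);        // 2012 draft name for start()
      this.voices[note] = voice;      // "last voice wins" per note number
    };

    MIDIChannel.prototype.noteOff = function (note, when) {
      var voice = this.voices[note];
      if (voice) {
        voice.noteOff(when || 0);     // 2012 draft name for stop()
        delete this.voices[note];
      }
    };

Each noteOn creates a new one-shot node - exactly the "voice = node"
mapping under discussion - while the channel object provides the
MIDI-style any-number-of-noteOns surface on top.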
> However, if an "instantiated voice" is reified as a node (as in this "one
> shot node" design), we get explicit voice-level control of the output
> audio, on top of which we can build MIDI-like management mechanisms such
> as voice stealing or grouping voices into channels if we want. (If we
> keep a reference around, we can access the state of these nodes and find
> out that they have finished, since the objects won't be garbage
> collected.)
>
> If we need to feed these voices into a complex signal-processing graph
> that we can't afford to create afresh per voice, then we can hold a
> reference to the subgraph and send the outputs of the one-shot nodes to
> the persistent subgraph. (It would then be important to make the overhead
> of creating and destroying such one-shot nodes as low as possible.)
>
> For the above reasons, I favour the "voice = one shot node" mapping. But
> perhaps with better naming of these nodes and the noteOn/noteOff methods,
> their behaviour can be made clearer? There have been some suggestions,
> such as renaming noteOn/Off to "start" and "stop". Maybe "finish" or
> "die" might better indicate that "start" cannot be called again.
> "finish()" would also be consistent with the node's "FINISHED_STATE"
> enum. Also, "source node" is an inadequate description, since we also
> have "MediaElementSourceNode" and "MediaStreamSourceNode", which don't
> have this noteOn/Off behaviour.
>
> Best,
> -Kumar
>
> On 5 Aug, 2012, at 12:41 PM, lonce wyse <lonce.wyse@zwhome.org> wrote:
>
> Hello,
>
> From where I sit, the problem is a usage issue:
>
> Oscillator and AudioBufferSourceNode objects can only be used once
> through noteOn/noteOff
>
> because it seems inconsistent with the way other nodes can be used by
> creating a graph architecture, and "using them more than once" if you
> keep a reference to them around.
>
> - lonce
>
> On 5/8/2012 11:00 AM, Chris Rogers wrote:
>
> On Sat, Aug 4, 2012 at 7:49 PM, Srikumar Karaikudi Subramanian
> <srikumarks@gmail.com> wrote:
>
>> Hi all,
>>
>> It appears that the Oscillator node's setup code for the
>> "basic waveforms" is needlessly run for every node ("voice")
>> instantiated. To avoid repeating the waveform setup code,
>> can we perhaps delegate the task of creating the basic
>> waveforms to the AudioContext object instead of the Oscillator
>> node? i.e. AudioContext.createWaveTable(type) can be
>> overloaded to return a wave table of the requested basic
>> waveform, which can then be assigned to any number of
>> oscillator nodes.
>>
>> Thoughts?
>
> Hi Kumar, I see you must have been poking around in the WebKit source
> code to see this :)
> You've indeed found an inefficiency in the implementation, and might
> consider filing a WebKit bug about this. But it's an implementation
> detail and can be optimized there, without needing to modify the API.
>
> Cheers,
> Chris
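A footnote on the wave-table exchange quoted above: even without the
proposed createWaveTable(type) overload, script can already pay the
waveform setup cost once by building a single custom WaveTable and
assigning it to any number of Oscillator nodes. A rough sketch against the
2012 draft API (createWaveTable(real, imag) and setWaveTable, later
renamed createPeriodicWave/setPeriodicWave); the webkitAudioContext
constructor and the 1/n partial amplitudes are assumptions for
illustration:

    // Build one custom wave table and reuse it for every oscillator voice,
    // so the waveform setup runs once rather than per node.
    var context = new webkitAudioContext();

    var numPartials = 16;
    var real = new Float32Array(numPartials); // cosine terms, all zero here
    var imag = new Float32Array(numPartials); // sine terms
    for (var n = 1; n < numPartials; n++) {
      imag[n] = 1 / n;                        // rough sawtooth-like rolloff
    }
    var sharedTable = context.createWaveTable(real, imag);

    function playVoice(frequency, when, duration) {
      var osc = context.createOscillator();
      osc.setWaveTable(sharedTable);    // no per-voice table construction
      osc.frequency.value = frequency;
      osc.connect(context.destination);
      osc.noteOn(when);                 // one-shot: a new node per voice
      osc.noteOff(when + duration);
    }

    playVoice(220, context.currentTime, 0.5);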
Received on Sunday, 5 August 2012 20:22:37 UTC