- From: Chris Rogers <crogers@google.com>
- Date: Sun, 5 Aug 2012 13:53:18 -0700
- To: Srikumar Karaikudi Subramanian <srikumarks@gmail.com>
- Cc: lonce wyse <lonce.wyse@zwhome.org>, public-audio@w3.org
- Message-ID: <CA+EzO0mO3ZSO=6cFijQiJ23TT==ezNHPX+Lw73g=7R0CThFswg@mail.gmail.com>
On Sat, Aug 4, 2012 at 11:32 PM, Srikumar Karaikudi Subramanian <srikumarks@gmail.com> wrote:

> Yes, I agree that it does seem inconsistent. However, the current "one shot" design has some distinct advantages.
>
> Consider the alternatives. If multiple noteOn/noteOff sequences are supported for such source nodes, then there is no obvious answer to how sequences of noteOn/noteOff calls should be handled. For example, what should happen for the call sequence "noteOn(1), noteOff(3), noteOn(2)"? Should the second noteOn override the noteOff? Should the noteOff be advanced to happen before the second noteOn? Should the noteOn end up cancelling the scheduled noteOff? And so on. Suppose each noteOn call on such a node is to start a new voice in parallel; then which of these voices should a noteOff stop - the most recent one, or all of them? No particular choice here seems satisfactory or obviously useful as the default.
>
> However, if an "instantiated voice" is reified as a node (as in this "one shot node" design), we get explicit voice-level control of the output audio, on top of which we can build MIDI-like management mechanisms such as voice stealing or grouping voices into channels if we want. (If we keep a reference around, we can access the state of these nodes and find out that they have finished, since the objects won't be garbage collected.)

Yes, exactly, and explicit voice control is absolutely essential in order to create independent envelopes (amplitude, pitch, filter, etc.) for each voice. Because each voice has a one-to-one relationship with a node, we're in a great spot to do this by combining the nodes together to make the synth. If an AudioBufferSourceNode itself had "internal" multiple voices then we'd be in a very bad place.

> If we need to feed these voices into a complex signal processing graph that we can't afford to create afresh per voice, then we can hold a reference to the subgraph and send the outputs of the one-shot nodes to the persistent subgraph. (It would then be important to make the overhead of creating and destroying such one-shot nodes as low as possible.)
>
> For the above reasons, I favour the "voice = one shot node" mapping. But perhaps with better naming of these nodes and the noteOn/noteOff methods, their behaviour can be made clearer? There have been some suggestions, such as renaming noteOn/Off to "start" and "stop". Maybe "finish" or "die" might better indicate that "start" cannot be called again. "finish()" would also be consistent with the node's "FINISHED_STATE" enum. Also, "source node" is an inadequate description, since we also have "MediaElementSourceNode" and "MediaStreamSourceNode", which don't have this noteOn/Off behaviour.
>
> Best,
> -Kumar
>
> On 5 Aug, 2012, at 12:41 PM, lonce wyse <lonce.wyse@zwhome.org> wrote:
>
> Hello,
>
> From where I sit, the problem is a usage issue:
>
> Oscillator and AudioBufferSourceNode objects can only be used once through noteOn/noteOff
>
> because it seems inconsistent with the way other nodes can be used by creating a graph architecture, and "using them more than once" if you keep a reference to them around.
>
> - lonce
>
> On 5/8/2012 11:00 AM, Chris Rogers wrote:
>
> On Sat, Aug 4, 2012 at 7:49 PM, Srikumar Karaikudi Subramanian <srikumarks@gmail.com> wrote:
>
>> Hi all,
>>
>> It appears that the Oscillator node's setup code for the "basic waveforms" is needlessly run for every node ("voice") instantiated.
>> To avoid repeating the waveform setup code, can we perhaps delegate the task of creating the basic waveforms to the AudioContext object instead of the Oscillator node? i.e. AudioContext.createWaveTable(type) can be overloaded to return a wave table of the requested basic waveform which can then be assigned to any number of oscillator nodes.
>>
>> Thoughts?
>
> Hi Kumar, I see you must have been poking around in the WebKit source code to see this :)
> You've indeed found an inefficiency in the implementation, and might consider filing a WebKit bug about this. But, it's an implementation detail and can be optimized there, without needing to modify the API.
>
> Cheers,
> Chris
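To make the "voice = one shot node" pattern discussed above concrete, here is a rough sketch of a per-voice subgraph. It is written against the draft-era names used in this thread (webkitAudioContext, createGainNode, noteOn/noteOff; later drafts call these AudioContext, createGain, and start/stop), and the playVoice helper, buffer, and timing values are purely illustrative:

    // One persistent subgraph, created once and shared by every voice.
    var context = new webkitAudioContext();      // prefixed constructor in 2012 WebKit
    var master = context.createGainNode();       // later renamed to createGain()
    master.connect(context.destination);

    // Each call instantiates a fresh one-shot "voice" with its own
    // amplitude envelope, schedules it, and lets it play out.
    function playVoice(buffer, when, duration) {
      var source = context.createBufferSource(); // one node == one voice
      source.buffer = buffer;

      var envelope = context.createGainNode();   // independent per-voice envelope
      envelope.gain.setValueAtTime(0, when);
      envelope.gain.linearRampToValueAtTime(1, when + 0.01);
      envelope.gain.linearRampToValueAtTime(0, when + duration);

      source.connect(envelope);
      envelope.connect(master);                  // feed the persistent subgraph

      source.noteOn(when);                       // "start(when)" in later drafts
      source.noteOff(when + duration);           // "stop(when)" in later drafts
      return source;                             // keep a reference to observe FINISHED_STATE
    }

Because every voice is its own node, each gets an independent envelope, and all of them feed the single persistent subgraph rather than rebuilding it per note.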
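On the wave-table question, the reuse Kumar describes can already be approximated from script by building one custom wave table and assigning it to any number of oscillator voices; the overloaded createWaveTable(type) for the basic waveforms remains a hypothetical addition. This sketch assumes the 2012-era createWaveTable(real, imag) and setWaveTable names (later createPeriodicWave/setPeriodicWave):

    // Build one custom wave table a single time (a few sawtooth-like partials)...
    var context = new webkitAudioContext();
    var real = new Float32Array(16);
    var imag = new Float32Array(16);
    for (var n = 1; n < 16; n++) {
      imag[n] = 1 / n;                           // Fourier coefficients; real[] stays zero
    }
    var sharedTable = context.createWaveTable(real, imag);  // later createPeriodicWave()

    // ...then hand the same table to every oscillator voice instead of
    // letting each node build its own tables internally.
    function playTone(frequency, when, duration) {
      var osc = context.createOscillator();
      osc.setWaveTable(sharedTable);             // later setPeriodicWave()
      osc.frequency.value = frequency;
      osc.connect(context.destination);
      osc.noteOn(when);
      osc.noteOff(when + duration);
    }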
Received on Sunday, 5 August 2012 20:53:46 UTC