Re: noteOn/noteOff and node lifetimes ...

On Mon, Aug 6, 2012 at 10:40 AM, Jussi Kalliokoski <
jussi.kalliokoski@gmail.com> wrote:

>
>
> On Mon, Aug 6, 2012 at 8:02 PM, Chris Rogers <crogers@google.com> wrote:
>
>>
>>
>> On Sun, Aug 5, 2012 at 11:28 PM, lonce wyse <lonce.wyse@zwhome.org> wrote:
>>
>>>
>>> Hello -
>>>
>>>     While I see the issues with having to manage voices (e.g. overlapping
>>> note-ons, sending different parameters to different notes), I don't see why
>>> a good solution is to have AudioBufferSourceNode behave in a manner
>>> inconsistent with all other Nodes. Not being able to keep references and
>>> node connections around for reuse for one type of node but not others is
>>> confusing and makes code hard to manage and read.
>>>
>>
>> I don't think it's that hard to understand the difference between the
>> concept of dynamic sounds with a limited life-time versus long-lived
>> sources such as live-audio and streamed music sources.  The idea that some
>> nodes like Oscillator and AudioBufferSourceNode will automatically
>> disconnect themselves from the graph when they are finished is a powerful
>> feature because it gives automatic (for free) dynamic voice allocation,
>> allowing the dynamic sources to "just work" without a lot of messing around
>> with manual management that would otherwise be necessary.  And I also don't
>> think that it's quite true that you can't keep references and node
>> connections around for re-use.  For example, you can use a single
>> Oscillator in a long-lived use case, such as a monosynth application
>> where you have total control of its lifetime (by never calling noteOff())
>> and create the notes by controlling the amplitude and filter envelopes.
>>
>>
>>>     There are other solutions - in a synthesis system I once wrote (not
>>> unlike webaudio), note on/offs and parameters were given unique IDs so that
>>> they could be properly matched (there was even a special "all" ID so that
>>> one could, for example, bend the pitch of all notes with one call).
>>>
>>
>> Yes, but in the current solution we already have this notion of ID, where
>> the ID is an instance of an AudioBufferSourceNode or Oscillator.  This is
>> much simpler than having to invent yet another concept.
>>
>>
>>
>>>
>>>     Obviously, you don't want to have your average game developer
>>> keeping track of IDs for notes. But this issue highlights what I see as a
>>> critical threat to webaudio - that it is falling into the classic HCI trap of
>>> trying to meet the needs of at least two very different groups - sound
>>> developers and sound users. Sound developers should be trusted to create
>>> the sounds and systems with user-targeted interfaces as fast as webaudio
>>> provides developer-strength audio components and developer-oriented APIs!
>>>
>>
>> I'm not sure that I agree that the groups will always be so different as
>> you think.  There are many game developers who are very interested in music
>> and synthesis, and vice versa.  Hybrid applications involving some aspects
>> of both are a very rich area in my opinion.
>>
>> I'm really trying to understand exactly what types of applications are
>> being made more difficult by the current design as compared with any other.
>>  You mentioned IDs, etc. but we effectively have that already.
>>
>
> One example is a graphical layer on top of the API. You make an
> AudioBufferSourceNode or an Oscillator, add it to the graph and when it's
> finished playing, it does what? Disappears, and you have to make a new one?
>

Sure, I understand.  But this is such a non-issue because it's absolutely
trivial to create a wrapper object representing a persistent sound that can
be fired many times.

We've written tutorials to explain to people how to abstract things like
this:
http://www.html5rocks.com/en/tutorials/webaudio/intro/

See the "playSound()" function:

function playSound(buffer) {
  var source = context.createBufferSource(); // creates a sound source
  source.buffer = buffer;                    // tell the source which sound to play
  source.connect(context.destination);       // connect the source to the context's destination (the speakers)
  source.noteOn(0);                          // play the source now
}
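
To make the wrapper idea concrete, here is a rough sketch of one way to do it
(the SoundPlayer and trigger names are just made up for illustration, not
part of the API):

// Sketch of a persistent, re-triggerable wrapper.  Each call to trigger()
// internally creates a fresh one-shot AudioBufferSourceNode, so the wrapper
// object itself can be kept around and fired as many times as you like.
// (SoundPlayer and trigger are illustrative names only, not part of the API.)
function SoundPlayer(context, buffer, destination) {
  this.context = context;
  this.buffer = buffer;
  this.destination = destination || context.destination;
}

SoundPlayer.prototype.trigger = function(time) {
  var source = this.context.createBufferSource();
  source.buffer = this.buffer;
  source.connect(this.destination);
  source.noteOn(time || 0);
  return source; // keep this reference if you need per-voice control
};

A UI element can then hold onto a single SoundPlayer instance for its whole
lifetime and simply call trigger() whenever it needs to play.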

The Web Audio API does *not* imply any particular graphical representation
in a UI, even though the spec uses many diagrams to explain the concepts.
The UI layer or application metaphor presented to the user of the
application can appear in many guises, and developers and graphic designers
can devise a great number of interesting and clever ways to represent audio
processing and synthesis graphically.  The API is not designed to map
one-to-one onto the UI in your example, although the desired behavior can be
implemented in a few lines of code.

The API is designed to expose the concept of dynamic voices, where each
voice's DSP processing can be independently and flexibly controlled.
Whether your metaphor is MIDI with channels, MPEG-4 SAOL/SASL with IDs, or a
synthesizer like the venerable Kurzweil K2000, a monophonic analog-modeling
synth, or a simple SoundFont synth, the building blocks are there.

Chris



>
>
>>
>> Chris
>>
>>
>>>
>>> Best,
>>>              - lonce
>>>
>>>
>>>
>>> On 8/6/2012 4:53 AM, Chris Rogers wrote:
>>>
>>>
>>>
>>> On Sat, Aug 4, 2012 at 11:32 PM, Srikumar Karaikudi Subramanian <
>>> srikumarks@gmail.com> wrote:
>>>
>>>> Yes, I agree that it does seem inconsistent. However, the current "one
>>>> shot" design has some distinct advantages.
>>>>
>>>>  Consider the alternatives. If multiple noteOn/noteOff sequences are
>>>> supported for such source nodes, then there is no obvious answer to how
>>>> sequences of noteOn/noteOff calls should be handled. For example what
>>>> should happen for the call sequence "noteOn(1), noteOff(3), noteOn(2)"?
>>>> Should the second noteOn override the noteOff? Should the noteOff be
>>>> advanced to happen before the second noteOn? Should the noteOn end up
>>>> cancelling the scheduled noteOff? etc.  Suppose each noteOn call on such a
>>>> node is to start a new voice in parallel; then which of these voices
>>>> should a noteOff stop - the most recent one or all of them? No particular
>>>> choice here seems satisfactory or obviously useful to be the default.
>>>>
>>>>  However, if an "instantiated voice" is reified as a node (as in this
>>>> "one shot node" design), we get explicit voice level control of the output
>>>> audio on top of which we can build MIDI-like management mechanisms such as
>>>> voice stealing or grouping voices into channels if we want. (If we keep
>>>> a reference around, we can access the state of these nodes and find out
>>>> that they have finished, since the objects won't be garbage collected.)
>>>>
>>>
>>>  Yes, exactly, and explicit voice control is absolutely essential in
>>> order to create independent envelopes (amplitude, pitch, filter, etc.) for
>>> each voice.  Because each voice has a one-to-one relationship with a node,
>>> we're in a great spot to do this by combining the nodes together to make
>>> the synth.  If an AudioBufferSourceNode itself had "internal" multiple
>>> voices then we'd be in a very bad place.
>>>
>>>
>>>>
>>>>  If we need to feed these voices into a complex signal processing
>>>> graph that we can't afford to create afresh per voice, then we can hold a
>>>> reference to the subgraph and send the outputs of the one-shot nodes to the
>>>> persistent subgraph. (It would then be important to make the overhead of
>>>> creating and destroying such one shot nodes as low as possible.)
>>>>
>>>>  For the above reasons, I favour the "voice = one shot node" mapping.
>>>> But perhaps with better naming of these nodes and the noteOn/noteOff
>>>> methods, their behaviour can be made clearer? There have been some
>>>> suggestions such as renaming noteOn/Off to "start" and "stop". Maybe
>>>> "finish" or "die" might better indicate that "start" cannot be called
>>>> again. "finish()" would also be consistent with the node's "FINISHED_STATE"
>>>> enum. Also "source node" is an inadequate description, since we also have
>>>> "MediaElementSourceNode" and "MediaStreamSourceNode" which don't have this
>>>> noteOn/Off behaviour.
>>>>
>>>>  Best,
>>>>    -Kumar
>>>>
>>>>  On 5 Aug, 2012, at 12:41 PM, lonce wyse <lonce.wyse@zwhome.org> wrote:
>>>>
>>>>
>>>> Hello,
>>>>
>>>> From where I sit, the problem is a usage issue:
>>>>
>>>>  Oscillator and AudioBufferSourceNode
>>>> objects can only be used once through noteOn/noteOff
>>>>
>>>>
>>>> because it seems inconsistent with the way other nodes can be used by
>>>> creating a graph architecture, and "using them more than once" if you keep
>>>> a reference to them around.
>>>>
>>>> - lonce
>>>>
>>>>
>>>>
>>>> On 5/8/2012 11:00 AM, Chris Rogers wrote:
>>>>
>>>>
>>>>
>>>> On Sat, Aug 4, 2012 at 7:49 PM, Srikumar Karaikudi Subramanian <
>>>> srikumarks@gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> It appears that the Oscillator node's setup code for the
>>>>> "basic waveforms" is needlessly run for every node ("voice")
>>>>> instantiated. To avoid repeating the waveform setup code,
>>>>> can we perhaps delegate the task of creating the basic
>>>>> waveforms to the AudioContext object instead of the Oscillator
>>>>> node? i.e. AudioContext.createWaveTable(type) can be
>>>>> overloaded to return a wave table of the requested basic
>>>>> waveform which can then be assigned to any number of
>>>>> oscillator nodes.
>>>>>
>>>>> Thoughts?
>>>>>
>>>>
>>>>  Hi Kumar, I see you must have been poking around in the WebKit source
>>>> code to see this :)
>>>> You've indeed found an inefficiency in the implementation, and might
>>>> consider filing a WebKit bug about this.  But, it's an implementation
>>>> detail and can be optimized there, without needing to modify the API.
>>>>
>>>>  Cheers,
>>>> Chris
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>

Received on Monday, 6 August 2012 18:36:22 UTC