
Re: Fwd: Serialization/introspection of the node graph

From: Marcus Geelnard <mage@opera.com>
Date: Mon, 27 May 2013 09:47:13 +0200
To: "public-audio@w3.org" <public-audio@w3.org>, "John Byrd" <jbyrd@giganticsoftware.com>
Message-ID: <op.wxqfwzdim77heq@mage-speeddemon>

Hi John!

I see your point. However, I think that one of the goals of the Web Audio  
API is to be minimal, for good reasons. For a Web standard, less is more.  
IMO, the most important role of a Web standard is to act as a technology  
enabler. Higher level solutions are much better implemented as JS  
libraries built on top of Web APIs.

Keep in mind that many different browser vendors need to implement the  
same functionality, with the exact same behavior and similar performance,  
and that the technology will be rolled out over a fairly long period of  
time (years). The more complex the API specification, the longer that  
roll-out is likely to take.

An example of a Web API that is not very simple to interact with is WebGL.  
You have to learn and implement tons of stuff just to get an animated 3D  
model up and running. This is where higher level frameworks, such as  
Three.js, come to the rescue.

I expect there to be similar higher level audio libraries built on top of  
the Web Audio API, and functionality such as sub-graph instancing sounds  
like a good fit for such a library.
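As a rough sketch of what such a library could look like (the descriptor
format and function names below are my own invention, not part of any spec;
the only assumptions are that the context exposes the usual factory methods
and that nodes have connect()), a subgraph can be described as plain data
and instanced any number of times:

```javascript
// Hypothetical descriptor for a "vocal warmth" subgraph with exactly
// one input and one output, expressed as plain, serializable data.
const vocalWarmth = {
  nodes: {
    input:  { create: "createGain" },
    filter: { create: "createBiquadFilter", params: { frequency: 200 } },
    comp:   { create: "createDynamicsCompressor" },
    output: { create: "createGain" },
  },
  // Connections as [from, to] pairs between the named nodes above.
  edges: [["input", "filter"], ["filter", "comp"], ["comp", "output"]],
};

// Build one independent instance of the described subgraph on the
// given AudioContext-like object.
function instantiate(ctx, desc) {
  const nodes = {};
  for (const [name, spec] of Object.entries(desc.nodes)) {
    const node = ctx[spec.create]();
    // Copy scalar params onto AudioParam-like properties, if present.
    for (const [param, value] of Object.entries(spec.params || {})) {
      if (node[param] !== undefined) node[param].value = value;
    }
    nodes[name] = node;
  }
  for (const [from, to] of desc.edges) nodes[from].connect(nodes[to]);
  // Expose the whole subgraph as a single input/output pair.
  return { input: nodes.input, output: nodes.output };
}
```

Calling instantiate(ctx, vocalWarmth) n times yields n independent copies
of the effect chain, and since the descriptor is plain data it is already
serializable with JSON.stringify, all without any change to the API itself.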

Like a Web standard, a JS lib is available to everyone, and (if well  
designed) has the exact same behavior on all browsers. The major  
difference, however, is that a JS lib can be made available instantly  
(e.g. through github), whereas a Web standard can take years before it's  
100% deployed (in the case of WebGL, we're getting close to a decade now).
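Even the record-and-replay idea you mention can live in a library today.
Here is a minimal sketch (the wrapper names are hypothetical, and it only
assumes the context's factory methods are ordinary functions): every call
made through the wrapper is logged, and the log can be replayed on another
context later.

```javascript
// Wrap a context so that every method call is appended to `log`
// while still being forwarded to the real context.
function recordingContext(ctx, log) {
  return new Proxy(ctx, {
    get(target, prop) {
      const value = target[prop];
      if (typeof value !== "function") return value;
      return (...args) => {
        log.push({ call: prop, args });
        return value.apply(target, args);
      };
    },
  });
}

// Re-issue a recorded command sequence against a (possibly different)
// context, reconstructing the same calls in the same order.
function replay(ctx, log) {
  for (const { call, args } of log) ctx[call](...args);
}
```

Note the limitation: a plain call log like this captures factory calls but
not the identities of the returned nodes, so a real library would also need
to record which node each subsequent connect() call refers to.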

/Marcus


On 2013-05-25 21:04:11, John Byrd <jbyrd@giganticsoftware.com> wrote:

> What you say may well be so, but I don't necessarily see how that must  
> be the case, nor how this idea conflicts with the current garbage  
> collection model.  Certainly, at the time of serializing a subset of  
> nodes, those nodes either exist or they don't.  Serializing a portion  
> of the node graph would necessarily be an atomic operation, and you'd  
> have the JS references to the objects immediately after they were  
> instanced.
>
> If you want a use case, imagine that we've created a really nice vocal  
> warming effect with a biquad filter, a de-esser and a compander.  We'd  
> then like to instance this group n times for use on n channels of audio  
> input.  For this use, we'd like to represent this subgraph as a single  
> large node, with exactly one input and one output.  Instancing this  
> subgraph n times helps us solve our problem.
>
> Another extremely common use case: I want a generic audio effect that  
> incorporates a bounded, random amount of pitch shift and gain, and I  
> want to apply it to all the footsteps in my game.  An audio designer  
> would prefer to have all these effects wrapped into a new effect node  
> which can be easily instanced, as opposed to instancing all these  
> objects individually in JavaScript.
>
> This instancing might be done in several ways.  Serialization and  
> deserialization of an existing subgraph is one way to accomplish this.   
> There are other ways to accomplish it as well, including serialization of  
> the Web Audio API itself, so that command sequences could be recorded  
> and replayed.
>
> jwb
>
>
> On Sat, May 25, 2013 at 12:21 AM, Robert O'Callahan  
> <robert@ocallahan.org> wrote:
>> On Sat, May 25, 2013 at 1:48 AM, John Byrd <jbyrd@giganticsoftware.com>  
>> wrote:
>>> In short, while the Web Audio API is great for JavaScript programmers,  
>>> and the functionality I describe can certainly be implemented in  
>>> JavaScript above the API layer, -all- audio applications will  
>>> eventually need or desire the functionality I describe or some part  
>>> of it, and therefore it might be of use to consider standardizing the  
>>> process for serializing portions of and/or all the current Web  
>>> Audio state.
>>
>> Adding methods to AudioNodes to let you traverse the node graph would  
>> actually prevent us from doing some very important optimizations  
>> (i.e., silently deleting nodes that have completely finished playing  
>> and are not otherwise referenced). So we shouldn't do that.
>>
>> A Web app can explicitly keep track of the structure of the graph it  
>> has constructed (although it's likely to defeat the above  
>> optimizations by doing so). If you want to write a spec for how that  
>> is done, and/or write a spec for (de)serializing a node graph --- and  
>> implement a library that follows that spec --- go ahead. That can be  
>> completely separate from the Web Audio API spec itself, as long as it  
>> doesn't require changes to browsers.
>>
>> Rob
>> --
>> "If you love those who love you, what credit is that to you? Even  
>> sinners love those who love them. And if you do good to those who  
>> are good to you, what credit is that to you? Even sinners do that."
>
> --
> John Byrd
> Gigantic Software
> 2102 Business Center Drive
> Suite 210-D
> Irvine, CA 92612-1001
> http://www.giganticsoftware.com
> T: (949) 892-3526 F: (206) 309-0850



-- 
Using Opera's groundbreaking e-mail client: http://www.opera.com/mail/
Received on Monday, 27 May 2013 07:47:49 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:18 UTC