Serialization/introspection of the node graph

Greetings again all,

My background is less in Webby designs and more in dynamic generation of
audio for video games.  Most of my work has been at large video game
publishers in the US, and for the past 6 years I've run a small company
that makes audio middleware for game consoles.

If I were to embrace this specification for native apps, one thing I'd have
to design and implement is a serialization and/or introspection protocol
for the current state of the audio node graph, as well as a protocol for
abstracting clumps of nodes ("groups").  An audio designer working with the
Web Audio API would presumably want to work with the nodes in a GUI
tool, and then instantiate collections of one or more nodes as sounds are
played and stopped.
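
For concreteness, here is a minimal sketch of what such a serialized graph
might look like, assuming a plain-JSON format of my own invention.  The Web
Audio API defines nothing of the sort today, and every name below
(serializeGraph, the node descriptor fields, and so on) is hypothetical:

```javascript
// Hypothetical sketch: a node graph captured as plain data, independent
// of any live AudioContext.  A real implementation would walk actual
// AudioNodes; this sketch just round-trips a plain description.
function serializeGraph(nodes) {
  // nodes: array of { id, type, params, connections }
  return JSON.stringify({
    version: 1,
    nodes: nodes.map(n => ({
      id: n.id,
      type: n.type,                // e.g. "OscillatorNode", "GainNode"
      params: n.params,            // AudioParam values at capture time
      connections: n.connections   // ids of downstream nodes
    }))
  });
}

function deserializeGraph(json) {
  const doc = JSON.parse(json);
  // A real implementation would reconstruct AudioNodes from the
  // descriptions; here we simply return the plain node list.
  return doc.nodes;
}

// Example: a two-node chain, oscillator -> gain -> destination.
const graph = [
  { id: "osc1",  type: "OscillatorNode",
    params: { frequency: 440 }, connections: ["gain1"] },
  { id: "gain1", type: "GainNode",
    params: { gain: 0.5 }, connections: ["destination"] }
];
const wire = serializeGraph(graph);
```

A designer tool that emitted this format could hand the blob to any
runtime (or push it over TCP to an embedded target) that knew how to
rebuild the nodes; the "groups" idea above would just be a named subgraph
in the same format.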

While all things are technically possible with JavaScript, the JS-style
interface to Web Audio objects currently makes knowledge of JavaScript a
barrier to entry for working with those objects.  This does not strike me
as a necessary requirement for a sound designer.  Additionally, a
standardized serialization format would permit interoperability among
designer tools that embraced it.

A common mode of game development on embedded targets involves use of
designer tools on a PC and then serializing that environment to the
embedded target over TCP.

In short, while the Web Audio API is great for JavaScript programmers, and
the functionality I describe can certainly be implemented in JavaScript
above the API layer, -all- audio applications will eventually need or
desire some part of that functionality.  It might therefore be worth
considering a standard process for serializing portions of, or all of, the
current Web Audio state.

Opinions are cheerfully requested.


John Byrd
Gigantic Software
2102 Business Center Drive
Suite 210-D
Irvine, CA   92612-1001
T: (949) 892-3526 F: (206) 309-0850

Received on Friday, 24 May 2013 17:49:15 UTC