Music Synthesis

A couple of observations on music synthesis:

I understand that music synthesis is part of this incubator's scope, that we are only just beginning the work, and that there may well be more proposals in the future.  That said, having looked at the Mozilla and Google proposals, I don't see direct support for music synthesis, as such, in either one.

It seems that under the Mozilla proposal, a synthesis engine, a music event system, and an optional playback sequencer would all have to be written in JavaScript/ECMAScript.  That would be a considerable amount of implementation in script.
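
To make the scale of that concrete, here is a very rough sketch of what per-sample synthesis in script might look like under a write-samples-from-JavaScript model such as Mozilla's draft.  The mozSetup/mozWriteAudio names are my understanding of that draft and should be treated as assumptions; the point is simply that every voice, envelope, and filter update would run in interpreted code.

    // Rough sketch, not taken from either proposal: one sine voice
    // rendered in script and pushed to the audio element.
    var audio = new Audio();
    audio.mozSetup(1, 44100);                // mono, 44.1 kHz (assumed API)

    var phase = 0;
    function writeBlock(freq, blockSize) {
      var samples = new Float32Array(blockSize);
      for (var i = 0; i < blockSize; i++) {
        // A single sine voice; a real engine would mix dozens of voices,
        // envelopes, and filters inside this same scripted loop.
        samples[i] = Math.sin(phase) * 0.5;
        phase += 2 * Math.PI * freq / 44100;
      }
      audio.mozWriteAudio(samples);          // assumed API
    }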

In the Google proposal I can see how node graphs could be used to construct arbitrarily complex and rewarding synth voices with native efficiency, but again there is no music event system or playback sequencer, so here too a significant amount of JavaScript/ECMAScript implementation would be needed.
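
For illustration, a single voice might be assembled as a small node graph along these lines.  The specific constructor and node names here are illustrative assumptions on my part, not quotations from the draft, which may spell them differently.

    // Illustrative only: one voice as a node graph
    // (oscillator -> dynamic lowpass filter -> per-voice gain envelope).
    var ctx = new AudioContext();

    function playVoice(freq, startTime, duration) {
      var osc = ctx.createOscillator();       // source
      var filter = ctx.createBiquadFilter();  // dynamic lowpass
      var amp = ctx.createGain();             // per-voice envelope

      osc.frequency.value = freq;
      filter.type = "lowpass";
      filter.frequency.setValueAtTime(8000, startTime);
      filter.frequency.exponentialRampToValueAtTime(500, startTime + duration);
      amp.gain.setValueAtTime(0.25, startTime);
      amp.gain.linearRampToValueAtTime(0, startTime + duration);

      osc.connect(filter);
      filter.connect(amp);
      amp.connect(ctx.destination);
      osc.start(startTime);
      osc.stop(startTime + duration);
    }
    // Note: nothing here schedules notes from a score.  The music event
    // system and sequencer would still have to be built in script on top.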

While JavaScript/ECMAScript is an extremely convenient execution environment in a browser, it has never previously been the go-to technology for music synthesis or event-system implementations, because of its low efficiency compared with native code.  That's why, historically, standardized APIs for music and sound synthesis and processing have typically interfaced to native implementations (occasionally VM implementations, à la Java), not interpreted ones.  Consumer expectations for music synthesis tend to run to relatively high polyphony; even mobile phones typically claim 48 or more simultaneous voices (wavetable synthesis plus a dynamic lowpass filter), and that would be difficult to achieve in an interpreted/scripting language across a broad range of client device capabilities.
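
A quick back-of-envelope calculation (my own rough numbers, not from either proposal) suggests why:

    // 48 voices at 44.1 kHz, assuming very roughly 25 scripted operations
    // per voice per sample (wavetable lookup and interpolation, envelope,
    // and a 2-pole lowpass):
    var voices = 48, sampleRate = 44100, opsPerVoiceSample = 25;
    var opsPerSecond = voices * sampleRate * opsPerVoiceSample;
    // = 48 * 44100 * 25, roughly 53 million operations per second,
    // before mixing, effects, or any event/sequencer processing.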

I hope we can try to keep these considerations in mind as our work proceeds.

	-- Chris G.
