Re: Web Audio Processing: Use Cases and Requirements

Hello David,
You are quite correct that the use cases document does not have many
scenarios for audio synthesis; it does mention synthesis prominently,
but the only explicit synthesis use case is synthesizing a metronome
sound.  We should make that more explicit.

That said, the Web Audio API does have quite a powerful Oscillator node to
synthesize sounds, and many of the features of the API have been designed
to enable common synthesis needs.  In fact, I put together a synthesizer
that uses Web Audio to model a standard analog-era instrument -
http://webaudiodemos.appspot.com/midi-synth/index.html - with no sound
samples involved (actually, not entirely true - I do use an impulse
response sample for the reverb).  The AudioParam scheduling mechanism
enables a lot of powerful envelope control.
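
As a rough illustration of what that scheduling gives you, here is a
plain-JavaScript sketch of a piecewise-linear envelope.  The function and
data shapes are my own invention, not part of the Web Audio API, but the
breakpoints mirror what calls like setValueAtTime and
linearRampToValueAtTime build up on an AudioParam:

```javascript
// Sketch: evaluate a piecewise-linear envelope like the one an AudioParam
// builds from setValueAtTime / linearRampToValueAtTime scheduling calls.
// `events` is a time-sorted list of { time, value } breakpoints.
function envelopeValueAt(events, t) {
  if (events.length === 0) return 0;
  if (t <= events[0].time) return events[0].value;
  for (let i = 1; i < events.length; i++) {
    const prev = events[i - 1];
    const next = events[i];
    if (t <= next.time) {
      // Linear interpolation between the two breakpoints.
      const frac = (t - prev.time) / (next.time - prev.time);
      return prev.value + frac * (next.value - prev.value);
    }
  }
  // Past the last event: hold the final value.
  return events[events.length - 1].value;
}

// A simple attack/decay gain envelope: ramp 0 -> 1 over 0.1 s,
// then decay 1 -> 0.3 over the next 0.4 s and hold.
const adEnvelope = [
  { time: 0.0, value: 0.0 },
  { time: 0.1, value: 1.0 },
  { time: 0.5, value: 0.3 },
];
console.log(envelopeValueAt(adEnvelope, 0.05)); // 0.5, midway through the attack
```

In the real API you would schedule these breakpoints on a GainNode's gain
parameter and let the audio thread do the interpolation for you.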

Note, by the way, that Ray's blog post was talking specifically about HTML5
<audio> - that is, the HTML audio element - not the Web Audio API, which
was still very new when he wrote that post.  A number of the samples Chris
Rogers has written (at
http://chromium.googlecode.com/svn/trunk/samples/audio/) are sound
synthesis demos - e.g., all the oscillator-* files, wave-table-synth.html,
and tone-editor.
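
To make the "ex nihilo" point concrete outside the browser: synthesizing a
sound is, at bottom, just filling a buffer with computed samples.  Here is a
plain-JavaScript sketch (the function and parameter names are my own, not
any API's) that sums sine partials - essentially the waveform-combining
idea from SoundEdit, and what the wave-table demos do under the hood:

```javascript
// Sketch: additive synthesis - fill a sample buffer by summing sine
// partials (harmonic multiples of a fundamental, each with its own
// amplitude), which is how combined waveforms produce timbre.
const SAMPLE_RATE = 44100;

function synthesize(fundamentalHz, partials, seconds) {
  const n = Math.floor(SAMPLE_RATE * seconds);
  const samples = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = i / SAMPLE_RATE;
    let s = 0;
    for (const { harmonic, amplitude } of partials) {
      s += amplitude * Math.sin(2 * Math.PI * fundamentalHz * harmonic * t);
    }
    samples[i] = s;
  }
  return samples;
}

// A 440 Hz tone with a softer octave partial mixed in for color.
const tone = synthesize(440, [
  { harmonic: 1, amplitude: 0.8 },
  { harmonic: 2, amplitude: 0.2 },
], 0.01);
console.log(tone.length); // 441 samples for 10 ms at 44.1 kHz
```

In a browser you could copy such a buffer into an AudioBuffer and play it,
though the OscillatorNode makes the common cases much simpler.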

-Chris

On Sat, Sep 22, 2012 at 6:51 AM, David Dailey <ddailey@zoominternet.net> wrote:

> I am pleased to see the work on this topic [1].
>
> The use cases *seem* to lack something that, in my mind, is rather
> fundamental: the ability to create sounds ex nihilo.  In the 1980’s Mac
> users had access to a pretty little program called SoundEdit [2] that
> allowed one, using SVG-like shapes (though I don’t recall that we called it
> SVG back then), to create waveforms that were then converted to simple
> sounds. A sine wave of a particular frequency might correspond to a pure
> tone. Waveforms could be combined to create timbre, so that voices could be
> created. Throughout the document, I see lots of references to using
> pre-recorded sounds, stored as little “auditory bitmaps” somewhere, but
> nowhere that a composer could construct the primitive sounds herself.
>
> I think I might not be the only person interested in such.  Ray Cromwell’s
> blog [3], mentioned at [4], points out an inability of HTML5 audio: “you
> cannot synthesize sound on the fly.”
>
> Perhaps this is at the core of people’s thinking already and has,
> accordingly, been so obvious as to elude mention. Perhaps I’ve missed it in
> my perusal of the use cases (apologies, if so – it would not be the first
> time I’ve misread such things).  In my own shallow and brief
> experimentations with computer generated music over the past 4 decades, the
> generation of primitive sounds would seem to be important to the group’s
> efforts.
>
> I would suggest that something like InkML with SMIL and a <path>-like
> element that has PostScript-like loops, recursions, reversals,
> transpositions and the like would go a long way once the composer can
> create (or borrow) a set of notes and voices.
>
> Regards
>
> David
>
> [1]
> https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/Overview.html#music-creation-environment-with-sampled-instruments
>
> [2] http://en.wikipedia.org/wiki/SoundEdit
>
> [3]
> http://cromwellian.blogspot.com/2011/05/ive-been-having-twitter-back-and-forth.html
>
> [4] http://lists.w3.org/Archives/Public/public-audio/2011AprJun/0041.html
>

Received on Saturday, 22 September 2012 15:15:35 UTC