W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2012

Re: Web Audio Processing: Use Cases and Requirements

From: Joseph Berkovitz <joe@noteflight.com>
Date: Sat, 22 Sep 2012 12:13:22 -0400
Cc: <public-audio@w3.org>
Message-Id: <E02AA206-6032-4F0B-9E36-A5DD56FA887D@noteflight.com>
To: "David Dailey" <ddailey@zoominternet.net>
Hi David,

I agree that this use case is missing, and that it's important. I will work on adding this concept back into the use cases.

Having said that, let me also take pains to say that Web Audio is indeed capable of synthesizing pure sampled data ex nihilo. This is a fundamental capability of the API. The only reason it was left out of the use cases was that, as you said, it seemed obvious. But use cases should state the obvious.
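To make the point concrete, here is a minimal sketch of what I mean by synthesizing sampled data from nothing. The `sineWave` helper name is just illustrative; the sample generation itself is plain arithmetic, and in a browser you would copy the resulting array into an AudioBuffer (via `copyToChannel`) and play it through an AudioBufferSourceNode:

```javascript
// Sketch: generating a pure tone "ex nihilo" as raw sample data.
// No pre-recorded material is involved; every sample is computed.
function sineWave(frequency, durationSeconds, sampleRate) {
  const length = Math.floor(durationSeconds * sampleRate);
  const samples = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    // One sample of a sine oscillator at the given frequency.
    samples[i] = Math.sin(2 * Math.PI * frequency * (i / sampleRate));
  }
  return samples;
}

// A 440 Hz tone, one second long, at the common 44.1 kHz rate.
const tone = sineWave(440, 1.0, 44100);
```

Timbres could then be built by summing several such waveforms at different frequencies and amplitudes, which is exactly the kind of additive construction David describes below.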



On Sep 22, 2012, at 9:51 AM, "David Dailey" <ddailey@zoominternet.net> wrote:

> I am pleased to see the work on this topic  [1].
> The use cases seem to lack something that, in my mind, is rather fundamental: the ability to create sounds ex nihilo.  In the 1980’s Mac users had access to a pretty little program called SoundEdit [2]  that allowed one, using SVG-like shapes (though I don’t recall that we called it SVG back then) to create waveforms that were then converted to simple sounds. A sine wave of a particular frequency might correspond to a pure tone. Waveforms could be combined to create timbre, so that voices could be created. Throughout the document, I see lots of references to using pre-recorded sounds, stored as little “auditory bitmaps” somewhere, but nowhere that a composer could construct the primitive sounds herself.
> I think I might not be the only person interested in such.  Ray Cromwell’s blog [3], mentioned at [4], points out an inability of HTML5 audio: “you cannot synthesize sound on the fly.”
> Perhaps this is at the core of people’s thinking already and that it has, accordingly, been so obvious as to elude mention. Perhaps I’ve missed it in my perusal of the use cases (apologies, if so – it would not be the first time I’ve misread such things).  In my own shallow and brief experimentations with computer generated music over the past 4 decades, the generation of primitive sounds would seem to be important to the group’s efforts.
> I would suggest that something like InkML with SMIL and a <path>-like element that has PostScript-like loops, recursions, reversals, transpositions and the like would go a long way once the composer can create (or borrow) a set of notes and voices.
> Regards
> David
> [1]  https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/Overview.html#music-creation-environment-with-sampled-instruments
> [2] http://en.wikipedia.org/wiki/SoundEdit
> [3] http://cromwellian.blogspot.com/2011/05/ive-been-having-twitter-back-and-forth.html
> [4] http://lists.w3.org/Archives/Public/public-audio/2011AprJun/0041.html

... .  .    .       Joe

Joe Berkovitz

Noteflight LLC
Boston, Mass.
phone: +1 978 314 6271

Received on Saturday, 22 September 2012 16:13:50 UTC
