[whatwg] Audio canvas?

ddailey wrote:
> I recall a little app called SoundEdit (I think) that ran on the Mac 
> back in the mid-1980s. I think it was shareware (at least it was 
> ubiquitous).
> 
> The editing primitives were fairly cleanly defined and had a reasonable 
> metaphoric correspondence to the familiar drawing actions.
> 
> There was a thing where you could grab a few seconds of sound and copy 
> it and paste it; you could drag and drop; you could invert (by just 
> subtracting each of the tones from a ceiling); you could reverse (by 
> inverting the time axis). You could even go in with your mouse and drag 
> formants around. It was pretty cool.
> 
> It would not be a major task for someone to standardize such an 
> interface and I believe any patents would be expired by now.

No need to go to particular _applications_ for inspiration when 
libraries developed with some generality in mind (e.g. 
http://www.speech.kth.se/snack/man/snack2.2/tcl-man.html) can already 
serve that purpose. A carefully chosen subset of Snack might be a good 
start.
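
To make that a bit more concrete, here is the sort of region-level 
subset I mean, sketched in JavaScript. All of the names (cut, amplify, 
fadeIn, the "clip" object) are hypothetical - they are not taken from 
Snack or from any spec, they just illustrate the shape such an API 
could take:

  // Hypothetical sketch only - none of these names exist in Snack or
  // in any spec. Assume "clip" is { samples: [...], rate: 44100 } with
  // float samples.
  function cut(clip, start, end) {
    // remove the region [start, end) and return it as a new clip
    var removed = clip.samples.splice(start, end - start);
    return { samples: removed, rate: clip.rate };
  }

  function amplify(clip, start, end, gain) {
    // scale a region in place
    for (var i = start; i < end; i++) {
      clip.samples[i] *= gain;
    }
  }

  function fadeIn(clip, start, end) {
    // linear ramp from silence to full level across the region
    var len = end - start;
    for (var i = 0; i < len; i++) {
      clip.samples[start + i] *= i / len;
    }
  }

Paste, insert-silence, fade-out and the rest would follow the same 
pattern.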

> David
> ----- Original Message ----- From: "Dave Singer" <singer at apple.com>
> To: <whatwg at lists.whatwg.org>
> Sent: Wednesday, July 16, 2008 2:25 PM
> Subject: Re: [whatwg] Audio canvas?
> 
> 
>> At 20:18  +0200 16/07/08, Dr. Markus Walther wrote:
>>>
>>> get/setSample(<samplePoint> t, <sampleValue> v, <channel> c).
>>>
>>> For the sketched use case - an in-browser audio editor - functions on 
>>> sample regions from {cut/add silence/amplify/fade} would be nice and 
>>> were mentioned as an extended possibility, but that is optional.
>>>
>>> I don't understand the reference to MIDI, because my use case has no 
>>> connection to musical notes, it's about arbitrary audio data on which 
>>> MIDI has nothing to say.
>>
>> get/set sample are 'drawing primitives' that are the equivalent of 
>> get/setting a single pixel in images.  Yes, you can draw anything a 
>> pixel at a time, but it's mighty tedious.  You might want to lay down 
>> a tone, or some noise, or shape the sound with an envelope, or do a 
>> whole host of other operations at a higher level than 
>> sample-by-sample, just as canvas supports drawing lines, shapes, and 
>> so on.  That's all I meant by the reference to MIDI.

I see. However, to repeat what I said previously:

audio =/= music.

The direction you're hinting at would truly justify inventing a new 
element, since it sounds like it's specialized to synthesized music. But 
that's a pretty narrow subset of what audio encompasses.

Regarding the tediousness of doing things one sample at a time, I agree, 
but maybe it's not as bad as it sounds. It depends on how fast 
JavaScript gets, and Squirrelfish is a very promising step (since the 
developers acknowledge they learnt the lessons from Lua, the next 
acceleration step could be to copy ideas from LuaJIT, the extremely fast 
Lua-to-machine-code JIT compiler). If it gets fast enough, client-side 
libraries could do amazing stuff using sample-at-a-time primitives.
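
For instance, taking the get/setSample primitives quoted above (still 
hypothetical - no such API exists yet), even the SoundEdit-style 
operations recalled at the top of this message, reverse and invert, 
come down to a few lines of plain JavaScript:

  // Sketch only: getSample(t, c) / setSample(t, v, c) are the
  // hypothetical per-sample primitives discussed in this thread;
  // "audio" is whatever object would end up exposing them.
  function reverseRegion(audio, start, end, channel) {
    // reverse a region by swapping samples from both ends inward
    for (var i = start, j = end - 1; i < j; i++, j--) {
      var a = audio.getSample(i, channel);
      var b = audio.getSample(j, channel);
      audio.setSample(i, b, channel);
      audio.setSample(j, a, channel);
    }
  }

  function invertRegion(audio, start, end, channel) {
    // "invert" in the SoundEdit sense: flip each sample about the
    // midline (simple negation, assuming samples normalised to [-1, 1])
    for (var i = start; i < end; i++) {
      audio.setSample(i, -audio.getSample(i, channel), channel);
    }
  }

Whether loops like these are fast enough for minutes of 44.1 kHz audio 
is exactly the JavaScript-speed question above.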

Still, as I suggested above, a few higher-level methods could be useful.


-- Markus

Received on Thursday, 17 July 2008 02:05:42 UTC