W3C home > Mailing lists > Public > whatwg@whatwg.org > July 2008

[whatwg] Audio canvas?

From: Dr. Markus Walther <walther@svox.com>
Date: Wed, 16 Jul 2008 15:17:58 +0200
Message-ID: <487DF506.1070505@svox.com>

 >> My understanding of HTMLMediaElement is that the currentTime, volume
 >> and playbackRate properties can be modified live.
 >>
 >> So in a way Audio is already like Canvas: the developer modifies
 >> things on the go. There are no automated animations/transitions as
 >> in SVG, for instance.
 >>
 >> Doing a cross-fade in Audio is done exactly the same way as in Canvas.

That's not what I described, however. Canvas allows access to the most 
primitive element from which an image is composed: the pixel. Audio 
allows no access to the sample, the pixel's equivalent in the sound 
domain. That is a severe limitation. Tricks with data URIs and a simple, 
well-known audio format such as PCM WAVE are no real substitute, because 
JavaScript strings are immutable.
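For the record, the data-URI trick goes roughly like this: compute raw 
PCM samples, prepend a minimal WAVE header, base64-encode the bytes, and 
hand the result to an audio element. A rough sketch (the helper names 
are mine, not from any standard API; a real page would assign the URI to 
an Audio element's src):

```javascript
// Build an 8-bit mono PCM WAVE file as an array of byte values.
function buildWav(samples, sampleRate) {
  const dataSize = samples.length;        // 1 byte per 8-bit sample
  const bytes = [];
  function str(s) { for (let i = 0; i < s.length; i++) bytes.push(s.charCodeAt(i)); }
  function u32(v) { bytes.push(v & 255, (v >> 8) & 255, (v >> 16) & 255, (v >> 24) & 255); }
  function u16(v) { bytes.push(v & 255, (v >> 8) & 255); }

  str("RIFF"); u32(36 + dataSize); str("WAVE");
  str("fmt "); u32(16);   // fmt chunk size
  u16(1);                 // audio format: PCM
  u16(1);                 // channels: mono
  u32(sampleRate);        // sample rate
  u32(sampleRate);        // byte rate (1 byte per sample, mono)
  u16(1);                 // block align
  u16(8);                 // bits per sample
  str("data"); u32(dataSize);
  for (const s of samples) bytes.push(s & 255);
  return bytes;
}

// 440 Hz sine tone, 0.25 s at 8 kHz, as unsigned 8-bit samples.
const rate = 8000;
const samples = [];
for (let i = 0; i < rate / 4; i++) {
  samples.push(Math.round(127 + 127 * Math.sin(2 * Math.PI * 440 * i / rate)));
}
const wav = buildWav(samples, rate);
const uri = "data:audio/wav;base64," +
            btoa(String.fromCharCode.apply(null, wav));
// In a page: new Audio(uri).play();
```

Note that every change to the samples forces rebuilding and re-encoding 
the whole string, which is exactly why this is no substitute for direct 
sample access.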

It is unclear to me why content is still so often treated as static by 
default. As desktop apps move to the browser, images and sound will 
increasingly be generated and modified on the fly, client-side.

 > And if you're thinking of special effects (e.g. delay, chorus, flanger,
 > band-pass, ...), remember that with Canvas, advanced effects require
 > trickery and compositing multiple Canvas elements.

I have use cases in mind such as an in-browser audio editor for music or 
speech applications (think 'Cool Edit/Audacity in a browser'), where 
doing everything server-side would be prohibitive because of the amount 
of network traffic.

--Markus
Received on Wednesday, 16 July 2008 06:17:58 UTC
