- From: Chris Lowis <chris.lowis@gmail.com>
- Date: Mon, 14 Jul 2014 15:07:13 +0100
- To: Paul Adenot <paul@paul.cx>, Audio WG <public-audio@w3.org>
> That reminds me of the issue where we can't implement
> side-chaining, but I was planning to write a proposal to the group
> when I write the spec text for the DynamicsCompressorNode.
>
> In any case, I agree this is something we need, especially now
> that we have more and more complex applications that use real
> instrument audio tracks, recorded live in the browser (like
> this year's Google I/O conference). It could certainly be
> useful for WebRTC as well, as it would make it dead easy to
> write some custom Web Audio API code that would harmonize
> levels.
>
> Any thoughts?

This is one place where we could really use some tests for the implementations. I think there are a lot of approaches to take for compression, and no accepted "standard" or obvious implementation (in contrast to the GainNode, for example, where, modulo some edge cases, it's fairly intuitive what the behaviour should be).

Tests would be a great way for people to communicate the changes they'd like to see with respect to make-up gain, channel counts, side-chaining and so on. It means we're all talking in a common language irrespective of our own backgrounds in audio processing.

I'd encourage the group to take a look at the WaveShaper node tests for some inspiration - they're very readable:

https://github.com/w3c/web-platform-tests/blob/master/webaudio/the-audio-api/the-waveshapernode-interface/curve-tests.html

My blog post on writing tests for the Web Audio API also provides some background and advice:

http://blog.chrislowis.co.uk/2014/04/30/testing-web-audio.html

Cheers,
Chris
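[As an illustration of the kind of expectation such a test could encode: a minimal sketch, assuming a hard-knee static compression curve with no make-up gain. `staticCurveDb` is a hypothetical reference helper for the test, not part of the Web Audio API — one of the many possible models the group would need to agree on.]

```javascript
// Hypothetical reference model: the static gain curve of a hard-knee
// compressor. Inputs and threshold are in dBFS; ratio is N:1.
// This is an assumed model for discussion, not spec-mandated behaviour.
function staticCurveDb(inputDb, thresholdDb, ratio) {
  if (inputDb <= thresholdDb) {
    return inputDb; // below threshold: unity gain, signal passes unchanged
  }
  // above threshold: only 1/ratio of the overshoot comes through
  return thresholdDb + (inputDb - thresholdDb) / ratio;
}

// A curve test could then assert points on this model, e.g.:
// with threshold -24 dB and ratio 4:1, a -10 dB input maps to
// -24 + (-10 - (-24)) / 4 = -20.5 dB.
console.log(staticCurveDb(-10, -24, 4)); // -20.5
console.log(staticCurveDb(-30, -24, 4)); // -30 (below threshold)
```

Asserting a handful of such points against an `OfflineAudioContext` rendering would make disagreements about make-up gain or knee shape visible as concrete failing numbers rather than prose.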
Received on Monday, 14 July 2014 14:07:44 UTC