
Re: DynamicsCompressorNode no automatic make up gain

From: Hongchan Choi <choihongchan@gmail.com>
Date: Mon, 7 Jul 2014 22:02:02 -0700
Message-ID: <CAH8-aR3XRBEKq=zw_LkTBdGzXVkhhJkMT+oyg_T6SbX8CNAJcA@mail.gmail.com>
To: Russell McClellan <russell@motu.com>
Cc: Paul Adenot <paul@paul.cx>, public-audio <public-audio@w3.org>
>
> this is a useful feature but not something you'd want all the time.
>

Is asking for a "side-chain" feature on the compressor too much? As far as I
can tell, side-chaining can't be implemented with the current version of the
API (except with a ScriptProcessorNode), since we do not have direct access
to the compressor node's level detector. Currently the gain reduction (GR)
value is all we can get, and it exists only for visualization/metering.
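For what it's worth, here is a minimal sketch of how a side-chain "ducker" could be approximated today inside a ScriptProcessorNode callback, with the level detector written by hand (all function names are hypothetical, and this is a rough per-block gain computer, not the spec's compressor algorithm):

```javascript
// Root-mean-square level of one block of samples.
function rmsLevel(block) {
  let sum = 0;
  for (let i = 0; i < block.length; i++) sum += block[i] * block[i];
  return Math.sqrt(sum / block.length);
}

// Gain computer: above `threshold` (linear), follow the compression slope
// set by `ratio`, and return the gain to apply to the main signal.
function sidechainGain(sidechainRms, threshold, ratio) {
  if (sidechainRms <= threshold) return 1;
  const target = threshold * Math.pow(sidechainRms / threshold, 1 / ratio);
  return target / sidechainRms;
}

// Per-block processing, as it would run in an onaudioprocess handler,
// with the side-chain key routed in as a second input channel.
function processBlock(main, sidechain, out, threshold, ratio) {
  const g = sidechainGain(rmsLevel(sidechain), threshold, ratio);
  for (let i = 0; i < main.length; i++) out[i] = main[i] * g;
  return g;
}
```

This is exactly the access the DynamicsCompressorNode withholds: the detector's level and the resulting gain, driven by a signal other than the one being attenuated.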

As you already know, neither auto make-up gain nor side-chaining is rare in
a real studio setup.

Best,
Hongchan




On Mon, Jul 7, 2014 at 11:24 AM, Russell McClellan <russell@motu.com> wrote:

> As a user of the API, I'd love a switchable make up gain, as this is a
> useful feature but not something you'd want all the time.  Also, it would
> be ideal if the exact implementation of the make-up gain were precisely
> specified.
>
> Thanks,
> -Russell
>
>
> On Mon, Jul 7, 2014 at 1:00 PM, Paul Adenot <paul@paul.cx> wrote:
>
>>  On Mon, Jul 7, 2014, at 06:23 PM, Raymond Toy wrote:
>>
>> On Thu, Jul 3, 2014 at 7:18 PM, Robin Reumers <robinreumers@gmail.com>
>> wrote:
>>
>> Hi all,
>>
>> For educational purposes, I used the DynamicsCompressorNode to teach
>> compression. However, the DynamicsCompressorNode in the Web Audio API uses
>> some sort of automatic make-up gain, so it's not just compressing above a
>> threshold, but actually bringing the soft parts up.
>>
>>
>> I don't quite understand what this make-up gain is, but originally the
>> DynamicsCompressorNode had emphasis and de-emphasis filters for high
>> frequencies that would cause the compressed audio to have a gain applied.
>> See the discussion at
>> http://lists.w3.org/Archives/Public/public-audio/2014JanMar/0010.html.
>>
>>
>> Make-up gain is a feature, present in most hardware and software dynamic
>> compression units, that adds a gain stage after the compression stage to
>> bring the level back to where it was, often within a window.
>>
>> This behaviour can be useful (and is most of the time configurable with
>> an on/off switch). Sometimes the input signal is simply too loud and you
>> want to harmonize the levels, but you don't want the gain stage
>> afterwards, because you intend to have another mixing stage where you
>> bring the level up independently of the input level.
>>
>> On other occasions, you just want to harmonize the levels, and the make-up
>> gain is an easy way to do that, because it tracks the input level
>> automatically. Think of a singer who just did a voice track, but at some
>> point in the performance moved a bit further from the microphone: you
>> just apply a compressor with make-up gain, and you've got your levels right
>> (the reality is often more complex than that, but that's the idea).
>>
>> We could implement that by having a `makeUpGain` property that would
>> default to true. We should be able to implement make-up gain using the
>> `reduction` AudioParam, but I'd have to check to make sure this is
>> appropriate (in terms of windowing, etc.).
>>
>> That reminds me of the issue where we can't implement side-chaining; I
>> was planning to write a proposal to the group when I write the spec text
>> for the DynamicsCompressorNode.
>>
>> In any case, I agree this is something we need, especially now that we
>> have more and more complex applications that use real-instrument audio
>> tracks recorded live in the browser (like this year's Google I/O
>> conference). It could certainly be useful for WebRTC as well, as it would
>> make it dead easy to write some custom Web Audio API code that would
>> harmonize levels.
>>
>> Any thoughts?
>>
>> Paul.
>>
>
>
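On the `reduction`-based make-up gain Paul describes above, a rough sketch of what a userland version might look like, polling the compressor's reduction value (dB of gain reduction, ≤ 0) and compensating with a GainNode. Control-rate polling via requestAnimationFrame is only a crude stand-in for a properly windowed detector, and the function names are hypothetical:

```javascript
// Convert decibels to a linear gain factor.
function dbToLinear(db) {
  return Math.pow(10, db / 20);
}

// Attach a GainNode after the compressor that cancels out the
// current gain reduction, approximating make-up gain.
function attachMakeupGain(context, compressor) {
  const makeup = context.createGain();
  compressor.connect(makeup);
  (function poll() {
    // Some implementations expose `reduction` as an AudioParam,
    // others as a plain float.
    const r = compressor.reduction.value !== undefined
      ? compressor.reduction.value
      : compressor.reduction;
    makeup.gain.value = dbToLinear(-r); // cancel the reduction
    requestAnimationFrame(poll);
  })();
  return makeup;
}
```

A signal-rate path (e.g. driving the gain directly from the `reduction` AudioParam, if it were connectable) would be cleaner, which is presumably what Paul means by checking the windowing.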


-- 
Hongchan Choi

PhD Candidate, Research Assistant
Center for Computer Research in Music and Acoustics (CCRMA)
Stanford University

http://ccrma.stanford.edu/~hongchan
Received on Tuesday, 8 July 2014 05:02:50 UTC
