
Re: channel layouts and up/down mixing

From: Ralph Giles <giles@mozilla.com>
Date: Tue, 15 Jan 2013 17:21:01 -0800
Message-ID: <50F6007D.7080908@mozilla.com>
To: public-audio@w3.org
On Mon, 14 Jan 2013 10:32:43, Chris Rogers wrote:

> I think these are fairly standard down-mixing coefficients, mostly
> coming from the ITU-R standards.

Can you be more specific about what standards you're referencing here?

I'm quite confused by the downmix matrices as well. It's clear that
upmixing occurs whenever multiple inputs with different numbers of
channels are connected to the same node, and upmixing is defined as
straightforward channel doubling or passthrough. That's fine.
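To make sure I'm reading that rule the same way: here's a minimal sketch (plain Python lists of samples; the function names are mine, and the mono-to-stereo duplication and stereo-to-quad passthrough are my reading of the spec text):

```python
def upmix_mono_to_stereo(mono):
    """Mono -> stereo: duplicate the single channel into L and R."""
    return [list(mono), list(mono)]

def upmix_stereo_to_quad(left, right):
    """Stereo -> quad: pass L/R through, fill the surrounds with silence."""
    silence = [0.0] * len(left)
    return [list(left), list(right), silence, silence]
```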

But when are the downmix matrices used? There's no clear reference to
them other than "for playback". Some more clarity would be useful here.

NB I've recently been working on surround downmixing for playback in
Firefox. We've been using different matrices, e.g. for 5.1->stereo:

                                                             / FL  \
                                                             |  C  |
/ L \        / 1 1/sqrt(2) 0 1/sqrt(2) sqrt(3)/2   1/2     \ | FR  |
\ R / = Norm \ 0 1/sqrt(2) 1 1/sqrt(2)    1/2    sqrt(3)/2 / | LFE |
                                                             | SL  |
                                                             \ SR  /

Where 'Norm' is chosen so the coefficients of each row sum to 2.0, as a
compromise between clipping and typical mixing levels.
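In code, that works out to something like the following sketch (the function name is mine, and I'm assuming the channel order FL, C, FR, LFE, SL, SR, which is what makes the coefficients line up: C shared equally between both rows, FL only into L, FR only into R):

```python
import math

# 5.1 -> stereo coefficients; assumed channel order: FL, C, FR, LFE, SL, SR
ROW_L = (1.0, 1.0 / math.sqrt(2), 0.0, 1.0 / math.sqrt(2), math.sqrt(3) / 2, 0.5)
ROW_R = (0.0, 1.0 / math.sqrt(2), 1.0, 1.0 / math.sqrt(2), 0.5, math.sqrt(3) / 2)

# Norm scales each row so its coefficients sum to 2.0.
NORM = 2.0 / sum(ROW_L)

def downmix_51_to_stereo(frame):
    """Down-mix one 6-channel frame (FL, C, FR, LFE, SL, SR) to (L, R)."""
    left = NORM * sum(c * s for c, s in zip(ROW_L, frame))
    right = NORM * sum(c * s for c, s in zip(ROW_R, frame))
    return left, right
```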

I see that web content can implement any downmix function it likes using
the splitter, merger, gain, etc. nodes, so the exact form of the standard
matrix isn't worth arguing over. I'm just unclear on where we're
specifying it at all.

 -r
Received on Wednesday, 16 January 2013 01:21:29 UTC
