
Re: Web Audio API channel handling

From: Chris Rogers <crogers@google.com>
Date: Wed, 15 Feb 2012 10:50:01 -0800
Message-ID: <CA+EzO0mCYOVFUh-eLUo=vyt+RPKha7WBkiq0jCViwefwZGr-tg@mail.gmail.com>
To: Michael Schöffler <michael.schoeffler@audiolabs-erlangen.de>
Cc: public-audio@w3.org
On Tue, Feb 14, 2012 at 3:11 AM, Michael Schöffler <
michael.schoeffler@audiolabs-erlangen.de> wrote:

> I don’t want to set the focus of the “Standardizing audio "level 1"
> features first?” issue on channel handling, so I decided to start a new
> thread.
>
> Here are some other issues that I have noticed during development. Maybe
> they’re helpful.
>
> - Six-channel limitation of Splitter/Merger
> Currently every splitter or merger is limited to 6 channels. If you have
> to merge/split lots of channels (e.g. 40), you need more than one
> merger/splitter to handle it. This is not a big thing, because it is only
> a limit set by a constant in the source code. But maybe it would be a good
> idea to let the number of split/merged channels be set via the API.
>

I totally agree.


> My suggestion is to add a new parameter to
> context.createChannelSplitter().
>

Yes, this seems like the right approach.
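Something along these lines, perhaps (a rough sketch; the parameter is hypothetical and not in the current spec, and the helper name is made up for illustration):

```javascript
// Rough sketch of the proposed API (hypothetical, not in the current spec):
// let the caller pick the channel count instead of the fixed limit of 6.
function createWideSplitterAndMerger(context, numberOfChannels) {
  // Proposed optional parameter: number of outputs / number of inputs.
  var splitter = context.createChannelSplitter(numberOfChannels);
  var merger = context.createChannelMerger(numberOfChannels);
  return { splitter: splitter, merger: merger };
}
```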


> An alternative would be to increase the value of the constant (e.g. to
> 1024). But I think the numberOfInputs attribute of a merger shouldn’t have
> a value of 1024. Also, the developer might want to check how many inputs
> the merger is capable of.
>
>
> - Upmixing even if a merger is used
> Yesterday I set up two configurations (
> http://h9.abload.de/img/updownmix6exmy.png <= I hope this scheme explains
> it well). If a merger is used, there is still upmixing (mono to
> stereo). I was able to follow the WebCore source code and had no problems
> understanding it. But I just want to say that I expected different behavior.
> As a workaround I used an AudioNode that writes a mute signal to the buffer.
> Sometimes it feels like handling channels (in the way I do) also means
> handling a lot of workarounds. Therefore I would be interested in other
> opinions/experiences about the channel handling :)
>

Great diagrams!  I see that your second test worked as expected, so I'll
talk about the first case.  My apologies for not being more clear in the
spec.  I tried my best to illustrate the design in the diagrams, since I
gave two examples, one with two inputs (merging to a two-channel output
stream), and the next example with six inputs (merging into a six channel
stream - which could be interpreted as 5.1).  In your case, you're only
connecting *one* input, so it will generate a stream with one channel.  To
get what you wanted, you'd have to connect another mono source node to the
second input.  Thus you'd have two inputs and would generate a single
output with two channels.  Of course you'd want silence to appear at the
second input.  You can simply use an AudioBufferSourceNode which hasn't
been configured with any buffer data (no noteOn() call, etc.) and connect
that to the second input.
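In code, that workaround looks roughly like this (a sketch against the current WebKit implementation; the helper name is made up for illustration):

```javascript
// Sketch: merge one real mono source into a two-channel stream by feeding
// a silent mono source (an AudioBufferSourceNode with no buffer set and no
// noteOn() call) to the merger's second input.
function mergeMonoToStereo(context, monoSource) {
  var merger = context.createChannelMerger();
  monoSource.connect(merger, 0, 0);            // real signal -> merger input 0
  var silence = context.createBufferSource();  // unconfigured, so it's silent
  silence.connect(merger, 0, 1);               // silence -> merger input 1
  return merger;                               // two mono inputs -> stereo out
}
```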

In short, the design is that the merger will generate as many channels as
the combined number of channels of all the inputs. If each input is mono,
then this will equal the number of inputs connected...
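Put as a trivial sketch of the rule (the function is just for illustration, not spec text):

```javascript
// The merger's output channel count is the sum of the channel counts of
// all connected inputs, so N connected mono inputs give an N-channel stream.
function mergedChannelCount(inputChannelCounts) {
  return inputChannelCounts.reduce(function (sum, channels) {
    return sum + channels;
  }, 0);
}
```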



>
> Btw. Chris, the comment about WebCL didn’t sound negative to me; I’m just
> very interested in other opinions. Thank you for yours!
>

And I really appreciate yours!
Cheers,
Chris
Received on Wednesday, 15 February 2012 18:50:33 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 15 February 2012 18:50:35 GMT