
Microphone input

From: Nora Petit de la Villéon <garrayfigura@hotmail.fr>
Date: Fri, 15 Mar 2013 20:15:12 +0100
Message-ID: <DUB108-W2173FF2833BB46C8C663CFAAED0@phx.gbl>
To: <public-audio-dev@w3.org>


Hello everybody,
Sorry, I am an absolute newbie with this API (and my English is not very good...).
I see that it is possible to record voice from a microphone, but the demo I tried does not seem to work.
Does it work? Only in Chrome? On Windows? On Mac?
Is sending the recording to a server simply an HTTP request?
Sorry for this question, which I suspect is obvious to all of you.
Thanks a lot,
Nora (from Paris)

Date: Fri, 15 Mar 2013 12:00:54 -0700
From: crogers@google.com
To: me@jory.org
CC: public-audio-dev@w3.org
Subject: Re: Multi-channel hardware support available for testing in Chrome Canary

On Fri, Mar 15, 2013 at 11:53 AM, Jory <me@jory.org> wrote:

Ah, so it was the panner that was messing with me. Are there plans to make the panner into a steerable multichannel one? I'm sure I won't be the only audio developer to get thrown by this, as I'd have expected it to behave quite differently. I guess for the moment, custom panning code has to be written to handle surround steering.

It's certainly possible to consider extending the PannerNode, but for now its output is stereo, so yes, you'll need to matrix the channels yourself; there are many interesting ways to do it.

By the way, another very simple way to output multiple channels is to create your own AudioBuffer manually (rather than decoding an existing audio file) with, say, 4 channels.  Put some interesting data into the AudioBuffer and play it back.  Alternatively, create a ScriptProcessorNode with, say, 4 output channels and render whatever you want there.  Neither approach requires a ChannelMergerNode.
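A minimal sketch of the manual-AudioBuffer approach Chris describes (the function name and test-tone contents are my own illustration; it assumes a browser context with Web Audio support is passed in):

```javascript
// Fill a 4-channel AudioBuffer with a different test tone per channel
// and play it back. No ChannelMergerNode is needed; the buffer itself
// carries all 4 channels.
function playFourChannelBuffer(context) {
  var seconds = 2;
  var frames = context.sampleRate * seconds;
  var buffer = context.createBuffer(4, frames, context.sampleRate);

  for (var ch = 0; ch < 4; ch++) {
    var data = buffer.getChannelData(ch);
    var freq = 220 * (ch + 1); // a different pitch per channel
    for (var i = 0; i < frames; i++) {
      data[i] = 0.25 * Math.sin(2 * Math.PI * freq * i / context.sampleRate);
    }
  }

  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start(0);
  return source;
}
```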


I'll give your suggestions a try in a bit.

On Mar 15, 2013, at 11:08, Chris Wilson <cwilso@google.com> wrote:

I will be writing a demo, yes.  As Chris said, to utilize the multi-channel output you really want to use ChannelMergerNodes - unless you're playing a source buffer node that is multichannel. 

I'd point out the text from the spec on AudioPannerNode: "The output of this node is hard-coded to stereo (2 channels) and currently cannot be configured."  You won't get surround-sound placement from Panners, at least not based on the current spec; similarly, ConvolverNodes will never output more than two channels.  If you run a 6-channel source through one of those nodes, it gets downmixed to stereo.  This is mentioned in their respective sections, but also in Section 9:

// PannerNode and ConvolverNode are like this by default.
pannerNode.channelCount = 2;
pannerNode.channelCountMode = "clamped-max";
pannerNode.channelInterpretation = "speakers";

As for a 6-channel WAV file - well, I don't have a 6-channel system handy, but it should work.  I tested decoding a 6-channel WAV (sample from McGill: http://www-mmsp.ece.mcgill.ca/documents/AudioFormats/WAVE/Samples.html) and it worked fine.  As in the spec, if you had 5.1 hardware, you'd have to set this to get it to output correctly:

// Set “hardware output” to 5.1
context.destination.channelCount = 6;
context.destination.channelCountMode = "explicit";
context.destination.channelInterpretation = "speakers";
Then connect the 6-channel BufferSourceNode to the context.destination.
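Putting those two steps together, a sketch of loading and playing a 6-channel WAV might look like this (the function name and URL parameter are illustrative; decodeAudioData preserves all six channels of the decoded file):

```javascript
// Configure the destination for 5.1 output, then decode a 6-channel WAV
// and route the resulting buffer straight to the destination.
function playSixChannelFile(context, url) {
  context.destination.channelCount = 6;
  context.destination.channelCountMode = "explicit";
  context.destination.channelInterpretation = "speakers";

  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";
  request.onload = function () {
    context.decodeAudioData(request.response, function (decoded) {
      var source = context.createBufferSource();
      source.buffer = decoded; // 6 channels; no merger required
      source.connect(context.destination);
      source.start(0);
    });
  };
  request.send();
}
```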

The output.L, .R, .C in section 9.1 are misleading you - that is effectively pseudocode detailing how up/down-mixing should be done between layouts.  Your "forced mono gets sent to the Center channel" is shown in that section:
    1 -> 5.1 : up-mix from mono to 5.1
        output.L = 0;
        output.R = 0;
        output.C = input; // put in center channel
        output.LFE = 0;
        output.SL = 0;
        output.SR = 0;

Simplistic version: you either need to

- work with original multi-channel (>2) sources and set your "hardware output" to interpret as speakers, in which case you need to be careful with nodes that downmix (Convolver and Panner), or
- set the interpretation to "discrete", and make sure you're sending the correct number of channels to the output to fill up all the channels.  This approach uses mergers, and is also how you would implement a digital DJ app or other multiple-paired-stereo-outputs case (e.g. a DAW).

Either way, of course, you'll want to set context.destination.channelCount to the desired number of channels, and the channelCountMode to "explicit".

Hope that helps,

On Fri, Mar 15, 2013 at 10:47 AM, Jory <me@jory.org> wrote:

I spent a lot of time reading through that part of the spec last night, but to no avail. I didn't try the ChannelMergerNode, so I'll give that a shot.

Something that was completely unclear to me was what output.L, output.R, output.C, etc., referred to. Where is "output" coming from? I didn't see anything like that in any of the examples elsewhere, nor could I find anything that subdivided channels inside the Inspector in Canary.

Also, should multichannel audio files play? If so, what formats?



On Mar 15, 2013, at 10:34, Chris Rogers <crogers@google.com> wrote:

On Thu, Mar 14, 2013 at 11:11 PM, Jory <me@jory.org> wrote:

On Thu, 14 Mar 2013 15:53:50 -0700, Chris Rogers wrote:
> For those interested in multi-channel output in the Web Audio API, here's
> your chance to try out an early build.  This could be of interest for
> digital DJ type applications, or rendering to multi-channel speaker
> installations...
> Please note this is an early build and little testing has been done yet.
>  So far I've tested on several devices on OSX.  I'm interested in your
> feedback!
> The .maxChannelCount attribute is now exposed to show the actual number of
> hardware channels:
> The .channelCount attribute should now be settable on the
> AudioDestinationNode to values up to .maxChannelCount
> Cheers,
> Chris

Well, I've spent a couple hours playing around, since multichannel
audio certainly interests me, both for games and music. I've had very
little success, though.

I'm finding it very difficult to get any audio to play out anything but
the Left and Right channels. The only success I've had was forcing a
down-mix to 1 channel, where sound finally came out my Center channel.
So clearly the system functions in some manner. I've tried setting up a
3D panner and hard-coding positional panning, but sound never leaves my
Left and Right channels.

Hi Jory, I believe Chris Wilson will be writing an article with more details and complete sample code.  In the meantime, you can look at some of the partial example code here:

Just to add a little more detail, with the "digital DJ" example you'd want to use a ChannelMergerNode to combine two separate stereo mixes.  The first one would be connected to input 0 of the merger and the 2nd to input 1.  Then the merger would be connected to the destination as configured in the partial example code.
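A sketch of that two-deck routing (the function name is my own, and deckA/deckB stand in for whatever nodes carry your two stereo mixes; this assumes the merger behavior Chris describes here, where each input's channels are appended in order):

```javascript
// Merge two independent stereo mixes into a 4-channel "discrete" output:
// deck A lands on hardware channels 0-1, deck B on channels 2-3.
function setUpTwoDeckOutput(context, deckA, deckB) {
  context.destination.channelCount = 4;
  context.destination.channelCountMode = "explicit";
  context.destination.channelInterpretation = "discrete";

  var merger = context.createChannelMerger(2); // two inputs
  deckA.connect(merger, 0, 0); // first stereo mix -> merger input 0
  deckB.connect(merger, 0, 1); // second stereo mix -> merger input 1
  merger.connect(context.destination);
  return merger;
}
```

Each deck would typically be a GainNode acting as a mix bus, with its sources connected upstream of it.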

The ChannelMergerNode turns out to be really important for these types of applications.  In another example, if your hardware supported 8 channels of output, you could configure it as 8-channel discrete, then create a merger and connect eight independent mono AudioNodes to inputs 0 - 7 of the merger.  Then you could hookup eight speakers and place them anywhere in the room you like.  And the mono sources could be whatever you want - maybe 8 de-correlated channels of white noise...
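The eight-speaker idea might be sketched like this (function name and the looping-noise-buffer details are illustrative, not from the original message):

```javascript
// Eight de-correlated mono noise sources, one per hardware channel 0-7,
// merged and sent to an 8-channel "discrete" destination.
function playEightChannelNoise(context) {
  context.destination.channelCount = 8;
  context.destination.channelCountMode = "explicit";
  context.destination.channelInterpretation = "discrete";

  var merger = context.createChannelMerger(8);
  for (var ch = 0; ch < 8; ch++) {
    var frames = context.sampleRate * 2;
    var buffer = context.createBuffer(1, frames, context.sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < frames; i++) {
      data[i] = Math.random() * 0.2 - 0.1; // quiet white noise
    }

    var source = context.createBufferSource();
    source.buffer = buffer;
    source.loop = true;
    source.connect(merger, 0, ch); // mono source -> merger input ch
    source.start(0);
  }
  merger.connect(context.destination);
  return merger;
}
```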

Hope that helps.


I also tried playing back a 6-channel interleaved WAV file, but while
the file appears to "play", sound doesn't come from any speaker, no
matter how I try to configure the channelCount.

I'm beginning to wonder if 6-channel source files are playable at this
time, or if that's unsupported. (Everything seems to have choked when I
supplied a 6-channel AAC file, but I'm not even 100% sure the file I
supplied was playable, since getting multi-channel AAC output is
kinda challenging with today's tools.) Also, a bit of sample code would
do my morale wonders right about now. :-)




Received on Monday, 18 March 2013 09:33:23 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 19:43:28 UTC