Re: Proposal for output device selection

I think there are multiple independent vectors for audio output selection:

   1. Channel/speaker layout selection: "I want this to go to the center
   speaker" / "this is a 5.1 sound clip".  The Web Audio spec defines this
   for mono/stereo/quad/5.1 layouts:
   https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#ChannelLayouts
   (a small code sketch follows this list).
   2. Semantic role of audio - this is the "background music" vs "game
   audio" vs "voice call" definition - the "logical channel".  For example,
   I'd like to have my phone route voice calls to my BT headset, but music
   playback to the BT car connection, and game sound to the device speakers
   (so my daughter, playing games on my phone in the back seat, doesn't
   disrupt my call or stop the music).  Y'know, hypothetically.  :)
   3. Big ol' pile of channels - the music production case.  I have a
   minimal version of this on my desk at work: most of the time I've got a
   headset on, but occasionally I want to switch to a speaker-based output
   to demo something, so I need to be able to change routings.
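
To make #1 concrete, this is roughly what routing a mono cue to the center
channel of a 5.1 output looks like with today's Web Audio API (untested
sketch; "cueBuffer" is just a placeholder for a mono AudioBuffer loaded
elsewhere):

    var ctx = new AudioContext();

    // If the hardware exposes 5.1, ask the destination for all six channels.
    if (ctx.destination.maxChannelCount >= 6) {
      ctx.destination.channelCount = 6;
    }

    // One merger input per output channel: L, R, C, LFE, SL, SR.
    var merger = ctx.createChannelMerger(6);

    var source = ctx.createBufferSource();
    source.buffer = cueBuffer;  // placeholder: a mono AudioBuffer

    // Connect the source's single output to merger input 2 - the center channel.
    source.connect(merger, 0, 2);
    merger.connect(ctx.destination);
    source.start(0);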

I think #1 is solved for Web Audio, and likely (haven't tested) works for
<audio> (it should, anyway).  #2 is the focus of the mobile-necessary (but
not inapplicable to desktop!) Mozilla proposal; it's also related to the
single "default audio device" model in Web Audio today.  #3 is a different
beast to me, and exposing all devices might (as Rob suggests) have privacy
implications; however, it's still a requirement for even
middling-complexity audio scenarios.
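
For #3, something along these lines is what I'd want to be able to write.
This is purely a sketch, not a proposal - enumerateOutputDevices() and
setOutputDevice() are made-up names, and the UA would presumably gate the
enumeration behind a permission prompt to address the privacy concern:

    // Made-up API, for illustration only.
    navigator.enumerateOutputDevices().then(function (devices) {
      // e.g. [{ deviceId: "...", label: "Built-in Speakers" },
      //       { deviceId: "...", label: "USB Headset" }, ...]
      var speakers = devices.filter(function (d) {
        return /speaker/i.test(d.label);
      })[0];
      if (speakers) {
        // Route just this element's output to the chosen device.
        document.getElementById("demo-player").setOutputDevice(speakers.deviceId);
      }
    });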

Most desktop platforms define a single "default device" for input and
output - audio for both, video for input - but not a semantic collection of
devices.  I think semantic roles make sense for #2, but not for #3.
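
For #2, what I picture is tagging an element with its semantic role and
letting the UA/OS do the routing - roughly the shape of the Mozilla
AudioChannels proposal, as I read it (the attribute name and values below
are taken from that proposal, and "remoteCallAudio" stands in for whatever
element carries the call's far-end audio):

    var music = new Audio("background.ogg");
    // Tag as "content" (music/media); the OS could route this to the car's
    // BT connection without the page knowing which devices exist.
    music.mozAudioChannelType = "content";
    music.play();

    // The call's audio is tagged as "telephony", so the OS can keep it on
    // the BT headset.
    remoteCallAudio.mozAudioChannelType = "telephony";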



On Fri, Aug 16, 2013 at 9:49 AM, Justin Uberti <juberti@google.com> wrote:

> Yes, I forgot to spell out how the application would route its output to
> left/center/right audio devices.
>
> Regardless, if this approach is applicable for input devices, I don't see
> why we would want a different model for output.
>
> On Thu, Aug 15, 2013 at 6:58 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Fri, Aug 16, 2013 at 11:14 AM, Justin Uberti <juberti@google.com> wrote:
>>
>>> Different applications may want to have different UIs to control these
>>> settings. One application may just want to control a single camera and
>>> audio device. Another application may want to have multiple cameras all
>>> used in concert, and allow the left/right/center camera/mic devices to be
>>> individually selected.
>>>
>>
>> You seem to be talking about input devices. I thought we were talking
>> about output.
>>
>> Rob
>>
>
>
