Re: Proposal for output device selection

From: Harald Alvestrand <hta@google.com>
Date: Fri, 16 Aug 2013 11:43:59 +0200
Message-ID: <CAOqqYVFZ1yF6=J3Be7-dsd8v7-+SbmtQTBEn-LXacR+zFB_C2w@mail.gmail.com>
To: robert@ocallahan.org
Cc: Chris Wilson <cwilso@google.com>, Martin Thomson <martin.thomson@gmail.com>, Justin Uberti <juberti@google.com>, "public-media-capture@w3.org" <public-media-capture@w3.org>, Victoria Kirst <vrk@google.com>, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) <tommyw@google.com>, Tommi Gunnarsson <tommi@google.com>
Users will be best served if they can access a UI that specifically addresses what they want to do - in the WebRTC conferencing case, "Give me the sound from the chat over the headphones!" - while still being able to unmute Spotify after the call and have it continue to appear on the hi-fi speakers.

A UA-embedded API has no idea what "the sound from the chat" means, so it
can't build a UI that helps the user do the Right Thing.

The UA needs to give JS the ability to tell output devices apart - even if
the names are cryptic (like "MX4020 Stereo", or even "12497"), they stay
constant for the user - and it should stop there.
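That minimal contract could be sketched roughly like this (the record shape with kind/deviceId/label fields and the function name are illustrative assumptions, not anything the UA actually exposes today):

```javascript
// Sketch of the minimal contract argued for above: the UA hands JS a
// device list with stable ids and whatever labels it has, and nothing
// more. The object shape here (kind/deviceId/label) is an assumption,
// loosely modeled on a MediaDeviceInfo-style record.
function listAudioOutputs(devices) {
  return devices
    .filter((d) => d.kind === "audiooutput")
    .map((d) => ({
      // The id stays constant for this user, even across sessions...
      id: d.deviceId,
      // ...while the label may be cryptic, or missing entirely.
      label: d.label || d.deviceId,
    }));
}

const outputs = listAudioOutputs([
  { kind: "audioinput", deviceId: "mic-1", label: "Built-in Microphone" },
  { kind: "audiooutput", deviceId: "out-1", label: "MX4020 Stereo" },
  { kind: "audiooutput", deviceId: "out-2", label: "12497" },
]);
// The app can now let the user pick among outputs by stable id,
// without the UA having to understand what "the chat" means.
```

The point is that the app, not the UA, attaches meaning to each id; the UA only guarantees the ids are distinguishable and stable.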

Pushing output device selection into browser chrome is a Bad Idea - as we
showed by pushing input device selection into browser chrome in Chrome a
few versions ago.

(Note: Default audio output device selection is an operating-system
function. We shouldn't move that into the UA either.)

My opinion.

On Fri, Aug 16, 2013 at 4:24 AM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Fri, Aug 16, 2013 at 11:22 AM, Chris Wilson <cwilso@google.com> wrote:
>> In particular, the digital audio workstation type case - or any music app
>> that wants access to multiple hardware interfaces, like a DJ app that has a
>> cue output as well as a mains output - typically has to leave this up to
>> the user, since it's hard to semantically define the "roles" of different
>> devices (sometimes there's no semantic difference - I just have two
>> four-track interfaces, and I want to have eight tracks of output, etc.)
> One way to address this kind of use-case might be to allow applications to
> define their own logical output devices.
> Another thing that might help is to add APIs to grab the current output
> configuration into an opaque data object which can be persistently stored
> locally (e.g. via IndexedDB), and which can be used to restore the current
> configuration, but which can't be sent anywhere.
> Given that, your hypothetical DJ app would be able to define logical "cue"
> and "mains" outputs, the UA would hook those up to output devices and allow
> the user to control that mapping, and the app could save and restore those
> settings and associate them with particular application-specific contexts.
> Would that help?
> Rob
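Rob's two ideas - app-defined logical outputs plus an opaque, locally persistable mapping - could be sketched roughly as follows. All names here are hypothetical; JSON serialization stands in for the opaque data object, and the IndexedDB storage step is elided:

```javascript
// Hypothetical sketch of the proposal above. The app declares logical
// outputs ("cue", "mains"); the UA (simulated here) owns the mapping
// from logical names to physical device ids; the app can snapshot the
// mapping into an opaque blob and restore it later, but never reads
// the device ids directly.
class LogicalOutputs {
  constructor(names) {
    this.mapping = new Map(names.map((n) => [n, null]));
  }
  // In a real UA this would be driven by browser chrome (the user's
  // choice), not called by the app itself.
  _uaAssign(name, deviceId) {
    if (!this.mapping.has(name)) throw new Error("unknown logical output");
    this.mapping.set(name, deviceId);
  }
  // Opaque snapshot: the app stores it (e.g. in IndexedDB) and treats
  // it as a black box, so device ids never leak to the app or network.
  snapshot() {
    return JSON.stringify([...this.mapping]);
  }
  restore(blob) {
    this.mapping = new Map(JSON.parse(blob));
  }
  // The only question the app may ask: is this output hooked up yet?
  isConnected(name) {
    return this.mapping.get(name) != null;
  }
}

const dj = new LogicalOutputs(["cue", "mains"]);
dj._uaAssign("cue", "out-2"); // user picks the headphones in UA chrome
const saved = dj.snapshot();

// Later session: the app restores its saved configuration.
const nextSession = new LogicalOutputs(["cue", "mains"]);
nextSession.restore(saved);
```

Under this sketch the app gets persistence and semantic roles ("cue", "mains") while the UA keeps sole custody of which physical device each role maps to.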
Received on Friday, 16 August 2013 09:44:46 UTC
