- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Thu, 15 Aug 2013 15:44:26 -0700
- To: "Robert O'Callahan" <robert@ocallahan.org>
- Cc: Justin Uberti <juberti@google.com>, Chris Wilson <cwilso@google.com>, "public-media-capture@w3.org" <public-media-capture@w3.org>, Harald Alvestrand <hta@google.com>, Victoria Kirst <vrk@google.com>, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) <tommyw@google.com>, Tommi Gunnarsson <tommi@google.com>
On 15 August 2013 15:23, Robert O'Callahan <robert@ocallahan.org> wrote:
> I'm having trouble thinking of a situation where that sort of thing is
> better controlled by the application than by the UA.

You obviously haven't used Chrome WebRTC with an audio setup like what I have here: two sets of headphones (only one with a mic), speakers on the laptop, speakers on a large monitor, and an occasional USB speakerphone-type device.

Under Windows, you can specify separate devices for "normal" use and "communications" use, which helps a little, but it still causes some strange audio output behaviour.

Generic controls don't work for all application usages, despite the best intentions of the generic implementation. That's why Skype manages its own preferences. Having some sort of control over this, even if it is just to ensure that choices are sticky over time for the same application, makes a lot of sense.
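For what it's worth, a minimal sketch of what application-managed "stickiness" could look like. This is my own illustration, not anything in the spec: `enumerateDevices()` is the real API that resolves with a list of `{ kind, deviceId, label }` entries, while `pickPreferredDevice` is a hypothetical helper that prefers a previously saved device when it is still plugged in and falls back gracefully when it is not:

```javascript
// Hypothetical helper (not part of any spec): given the device list from
// enumerateDevices() and a previously saved deviceId, prefer the saved
// device if it is still present; otherwise fall back to the first device
// of the wanted kind, or null if none exists.
function pickPreferredDevice(devices, kind, savedId) {
  const candidates = devices.filter((d) => d.kind === kind);
  const saved = candidates.find((d) => d.deviceId === savedId);
  return saved || candidates[0] || null;
}

// Example list in the shape enumerateDevices() resolves with
// (labels and ids are invented for illustration).
const devices = [
  { kind: "audioinput", deviceId: "headset-mic", label: "Headset Mic" },
  { kind: "audiooutput", deviceId: "laptop-speakers", label: "Laptop Speakers" },
  { kind: "audiooutput", deviceId: "usb-speakerphone", label: "USB Speakerphone" },
];

// Saved preference still attached: it wins.
console.log(pickPreferredDevice(devices, "audiooutput", "usb-speakerphone").deviceId);
// Saved device unplugged: fall back to the first available output.
console.log(pickPreferredDevice(devices, "audiooutput", "gone").deviceId);
```

The point is only that the application, not the UA, holds the saved preference, so the same app gets the same device across sessions even as the generic system default changes.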
Received on Thursday, 15 August 2013 22:44:54 UTC