- From: Janina Sajka <janina@rednote.net>
- Date: Tue, 20 Aug 2013 13:32:28 -0400
- To: Harald Alvestrand <harald@alvestrand.no>
- Cc: public-html@w3.org, Justin Uberti <juberti@google.com>, Stefan Hakansson LK <stefan.lk.hakansson@ericsson.com>
I am writing to note that the HTML-A11Y TF identified output device
selection as an a11y requirement at Sec. 4.8 of its "Media Accessibility
User Requirements" document:
http://www.w3.org/TR/media-accessibility-reqs/#requirements-on-the-parallel-use-of-alternate-content-on-potentially-------multiple-devices-in-parallel
I would hasten to admit we did not articulate this requirement as
clearly as we undoubtedly should have before finalizing our document as
a W3C Note. So, I would highlight two points:
1.) In addition to selecting a particular device for output of any associated content stream, we seek the
ability to identify multiple devices as output targets for a particular
content stream.
2.) We would expect content rendered on all selected devices to stay
synchronized; an illustrative sketch follows.
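By way of illustration only (this is a sketch of the requirement, not a
proposal for syntax), a hypothetical plural counterpart to the sinkId
property proposed below might look like the following, with made-up
device ids:

// Hypothetical sketch: a plural "sinkIds" attribute (not part of the
// proposal below) routing one element's output to two devices at once,
// with the user agent expected to keep them synchronized.
var renderer = document.getElementById("audio-renderer");
renderer.sinkIds = ["built-in-speakers", "usb-headset"];  // placeholder ids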
Janina
Harald Alvestrand writes:
> The proposal below was recently made to the WebRTC Media Capture Task Force.
>
> A discussion with chairs and staff indicated that this is possibly a
> better fit with the scope of the
> HTML working group; therefore I'm forwarding the proposal here.
>
> Commentary on the Media Capture list has been largely supportive of
> the concept, but we have some concerns about the need for
> fingerprinting protection, and about possibly having to ask the user
> for permission before exposing information about the devices attached
> to the user's computer.
>
> Comments (including comments on where it should be worked on) welcome!
>
> Harald Alvestrand, chair, WebRTC Media Capture Task Force
>
>
> -------- Original Message --------
> Subject: Proposal for output device selection
> Resent-Date: Mon, 12 Aug 2013 23:44:13 +0000
> Resent-From: public-media-capture@w3.org
> Date: Mon, 12 Aug 2013 16:43:25 -0700
> From: Justin Uberti <juberti@google.com>
> To: public-media-capture@w3.org <public-media-capture@w3.org>
> CC: Harald Alvestrand <hta@google.com>, Victoria Kirst
> <vrk@google.com>, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) <tommyw@google.com>,
> Tommi Gunnarsson <tommi@google.com>
>
>
>
> WG,
>
> With the work done on MediaStreamTrack.getSources (previously known as
> getSourceInfos/getSourceIds), we now have a clean way to enumerate,
> select, and remember audio and video input devices. However, we are
> not yet able to do this for audio output devices. This is a
> significant problem for scenarios where you want audio output to go
> to a headset that is not the default device, e.g. a USB or Bluetooth
> headset.
>
> Note that this goes outside the bounds of WebRTC - once we have an
> output device ID, we need a way to tell other HTML5 APIs, such as an
> <audio/> tag or Web Audio context, to use the specified device.
> Therefore locating this output device enumeration API on
> MediaStreamTrack (similar to getSources) is probably not the right
> fit.
>
> We therefore propose the navigator.getMediaSinks method as an API to
> use for output device enumeration, and a new HTMLMediaElement.sinkId
> property to allow the ids returned by getMediaSinks to be supplied
> to <audio/> and <video/> tags. (See full details below)
>
> getMediaSinks works much like getSources - it
> asynchronously returns a list of objects that identify devices, and
> for privacy purposes the .label properties are not filled in unless
> the user has consented to device access through getUserMedia.
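>
> For example (a sketch only, combining the proposed getMediaSinks with
> the existing getUserMedia call; the exact consent semantics are still
> open), a page could unlock the labels like this:
>
> function listSinksWithLabels() {
>   // Ask for audio access first so that .label is filled in afterwards.
>   navigator.getUserMedia({ audio: true }, function (stream) {
>     navigator.getMediaSinks(function (sinks) {
>       for (var i = 0; i < sinks.length; ++i) {
>         console.log(sinks[i].sinkId + " : " + sinks[i].label);
>       }
>     });
>     stream.stop();  // permission was all we needed, not the capture itself
>   }, function (error) {
>     console.log("getUserMedia was denied: " + error);
>   });
> }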
>
> If we like this design, it may make sense to move
> MediaStreamTrack.getSources to also be on the navigator object, for
> consistency.
>
> ----------------------------------------------------------------------------------------------
>
> *New enumeration API*
>
> // async API, returns results through SinkInfoCallback
> void navigator.getMediaSinks(SinkInfoCallback)
>
> // similar to SourceInfoCallback
> callback SinkInfoCallback = void (sequence<SinkInfo>)
>
> // similar to SourceInfo
> dictionary SinkInfo {
>   DOMString sinkId;
>   DOMString kind;
>   DOMString label;
> };
>
> *New API on HTMLMediaElement*
>
> // when set, specifies the desired audio output device to use
> DOMString HTMLMediaElement.sinkId
>
> *Usage*
>
> // print out the available audio output devices on the console
> function listAudioDevices() {
>   navigator.getMediaSinks(printAudioDevices);
> }
>
> function printAudioDevices(sinks) {
>   for (var i = 0; i < sinks.length; ++i) {
>     if (sinks[i].kind === "audio") {
>       console.log(sinks[i].sinkId + " : " + sinks[i].label);
>     }
>   }
> }
>
> // set the audio output for the <audio/> tag with the id "audio-renderer"
> function selectAudioOutput(sinkId) {
>   document.getElementById("audio-renderer").sinkId = sinkId;
> }
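>
> For completeness, a small sketch combining the two helpers above (the
> policy of picking the first audio sink is purely illustrative):
>
> function useFirstAudioSink() {
>   navigator.getMediaSinks(function (sinks) {
>     for (var i = 0; i < sinks.length; ++i) {
>       if (sinks[i].kind === "audio") {
>         selectAudioOutput(sinks[i].sinkId);
>         return;
>       }
>     }
>   });
> }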
>
>
>
>
--
Janina Sajka, Phone: +1.443.300.2200
sip:janina@asterisk.rednote.net
Email: janina@rednote.net
Linux Foundation Fellow
Executive Chair, Accessibility Workgroup: http://a11y.org
The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Chair, Protocols & Formats http://www.w3.org/wai/pf
Indie UI http://www.w3.org/WAI/IndieUI/